00:00:00.000 Started by upstream project "autotest-per-patch" build number 126157
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.035 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.036 The recommended git tool is: git
00:00:00.036 using credential 00000000-0000-0000-0000-000000000002
00:00:00.038 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.063 Fetching changes from the remote Git repository
00:00:00.066 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.102 Using shallow fetch with depth 1
00:00:00.102 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.102 > git --version # timeout=10
00:00:00.136 > git --version # 'git version 2.39.2'
00:00:00.136 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.154 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.154 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.703 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.715 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.727 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:06.727 > git config core.sparsecheckout # timeout=10
00:00:06.738 > git read-tree -mu HEAD # timeout=10
00:00:06.755 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:06.773 Commit message: "inventory: add WCP3 to free inventory"
00:00:06.773 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:06.855 [Pipeline] Start of Pipeline
00:00:06.866 [Pipeline] library
00:00:06.867 Loading library shm_lib@master
00:00:06.867 Library shm_lib@master is cached. Copying from home.
00:00:06.885 [Pipeline] node
00:00:06.897 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.899 [Pipeline] {
00:00:06.910 [Pipeline] catchError
00:00:06.911 [Pipeline] {
00:00:06.926 [Pipeline] wrap
00:00:06.937 [Pipeline] {
00:00:06.945 [Pipeline] stage
00:00:06.947 [Pipeline] { (Prologue)
00:00:07.143 [Pipeline] sh
00:00:07.459 + logger -p user.info -t JENKINS-CI
00:00:07.484 [Pipeline] echo
00:00:07.486 Node: GP6
00:00:07.494 [Pipeline] sh
00:00:07.802 [Pipeline] setCustomBuildProperty
00:00:07.811 [Pipeline] echo
00:00:07.813 Cleanup processes
00:00:07.819 [Pipeline] sh
00:00:08.105 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.105 627120 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.121 [Pipeline] sh
00:00:08.407 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.407 ++ grep -v 'sudo pgrep'
00:00:08.407 ++ awk '{print $1}'
00:00:08.407 + sudo kill -9
00:00:08.407 + true
00:00:08.425 [Pipeline] cleanWs
00:00:08.436 [WS-CLEANUP] Deleting project workspace...
00:00:08.436 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.445 [WS-CLEANUP] done
00:00:08.449 [Pipeline] setCustomBuildProperty
00:00:08.496 [Pipeline] sh
00:00:08.785 + sudo git config --global --replace-all safe.directory '*'
00:00:08.882 [Pipeline] httpRequest
00:00:08.951 [Pipeline] echo
00:00:08.953 Sorcerer 10.211.164.101 is alive
00:00:08.963 [Pipeline] httpRequest
00:00:08.968 HttpMethod: GET
00:00:08.969 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:08.969 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:08.972 Response Code: HTTP/1.1 200 OK
00:00:08.972 Success: Status code 200 is in the accepted range: 200,404
00:00:08.973 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:10.101 [Pipeline] sh
00:00:10.390 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:10.407 [Pipeline] httpRequest
00:00:10.439 [Pipeline] echo
00:00:10.440 Sorcerer 10.211.164.101 is alive
00:00:10.449 [Pipeline] httpRequest
00:00:10.454 HttpMethod: GET
00:00:10.455 URL: http://10.211.164.101/packages/spdk_b0f01ebc511599385f70a4df7103d68ab20c5be8.tar.gz
00:00:10.456 Sending request to url: http://10.211.164.101/packages/spdk_b0f01ebc511599385f70a4df7103d68ab20c5be8.tar.gz
00:00:10.477 Response Code: HTTP/1.1 200 OK
00:00:10.478 Success: Status code 200 is in the accepted range: 200,404
00:00:10.478 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b0f01ebc511599385f70a4df7103d68ab20c5be8.tar.gz
00:03:42.315 [Pipeline] sh
00:03:42.603 + tar --no-same-owner -xf spdk_b0f01ebc511599385f70a4df7103d68ab20c5be8.tar.gz
00:03:45.911 [Pipeline] sh
00:03:46.193 + git -C spdk log --oneline -n5
00:03:46.193 b0f01ebc5 scripts/pkgdep: Set yum's skip_if_unavailable=True under rocky8
00:03:46.193 719d03c6a sock/uring: only register net impl if supported
00:03:46.193 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:03:46.193 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:03:46.193 6c7c1f57e accel: add sequence outstanding stat
00:03:46.205 [Pipeline] }
00:03:46.222 [Pipeline] // stage
00:03:46.231 [Pipeline] stage
00:03:46.234 [Pipeline] { (Prepare)
00:03:46.250 [Pipeline] writeFile
00:03:46.263 [Pipeline] sh
00:03:46.548 + logger -p user.info -t JENKINS-CI
00:03:46.563 [Pipeline] sh
00:03:46.849 + logger -p user.info -t JENKINS-CI
00:03:46.861 [Pipeline] sh
00:03:47.147 + cat autorun-spdk.conf
00:03:47.147 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:47.147 SPDK_TEST_NVMF=1
00:03:47.147 SPDK_TEST_NVME_CLI=1
00:03:47.147 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:47.147 SPDK_TEST_NVMF_NICS=e810
00:03:47.148 SPDK_TEST_VFIOUSER=1
00:03:47.148 SPDK_RUN_UBSAN=1
00:03:47.148 NET_TYPE=phy
00:03:47.155 RUN_NIGHTLY=0
00:03:47.159 [Pipeline] readFile
00:03:47.191 [Pipeline] withEnv
00:03:47.193 [Pipeline] {
00:03:47.205 [Pipeline] sh
00:03:47.489 + set -ex
00:03:47.489 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:03:47.489 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:47.489 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:47.489 ++ SPDK_TEST_NVMF=1
00:03:47.489 ++ SPDK_TEST_NVME_CLI=1
00:03:47.489 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:47.489 ++ SPDK_TEST_NVMF_NICS=e810
00:03:47.489 ++ SPDK_TEST_VFIOUSER=1
00:03:47.489 ++ SPDK_RUN_UBSAN=1
00:03:47.489 ++ NET_TYPE=phy
00:03:47.489 ++ RUN_NIGHTLY=0
00:03:47.489 + case $SPDK_TEST_NVMF_NICS in
00:03:47.489 + DRIVERS=ice
00:03:47.489 + [[ tcp == \r\d\m\a ]]
00:03:47.489 + [[ -n ice ]]
00:03:47.489 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:50.782 rmmod: ERROR: Module irdma is not currently loaded
00:03:50.782 rmmod: ERROR: Module i40iw is not currently loaded
00:03:50.782 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:50.782 + true
00:03:50.782 + for D in $DRIVERS
00:03:50.782 + sudo modprobe ice
00:03:50.782 + exit 0
00:03:50.792 [Pipeline] }
00:03:50.810 [Pipeline] // withEnv
00:03:50.815 [Pipeline] }
00:03:50.835 [Pipeline] // stage
00:03:50.845 [Pipeline] catchError
00:03:50.846 [Pipeline] {
00:03:50.859 [Pipeline] timeout
00:03:50.859 Timeout set to expire in 50 min
00:03:50.861 [Pipeline] {
00:03:50.874 [Pipeline] stage
00:03:50.877 [Pipeline] { (Tests)
00:03:50.889 [Pipeline] sh
00:03:51.172 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:51.172 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:51.172 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:51.172 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:51.172 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:51.172 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:51.172 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:51.172 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:51.172 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:51.172 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:51.172 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:51.172 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:51.172 + source /etc/os-release
00:03:51.172 ++ NAME='Fedora Linux'
00:03:51.172 ++ VERSION='38 (Cloud Edition)'
00:03:51.172 ++ ID=fedora
00:03:51.172 ++ VERSION_ID=38
00:03:51.172 ++ VERSION_CODENAME=
00:03:51.172 ++ PLATFORM_ID=platform:f38
00:03:51.172 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:03:51.172 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:51.172 ++ LOGO=fedora-logo-icon
00:03:51.172 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:03:51.172 ++ HOME_URL=https://fedoraproject.org/
00:03:51.172 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:03:51.172 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:51.172 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:51.172 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:51.172 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:03:51.172 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:51.172 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:03:51.172 ++ SUPPORT_END=2024-05-14
00:03:51.172 ++ VARIANT='Cloud Edition'
00:03:51.172 ++ VARIANT_ID=cloud
00:03:51.172 + uname -a
00:03:51.172 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:03:51.172 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:52.106 Hugepages
00:03:52.106 node hugesize free / total
00:03:52.106 node0 1048576kB 0 / 0
00:03:52.106 node0 2048kB 0 / 0
00:03:52.106 node1 1048576kB 0 / 0
00:03:52.106 node1 2048kB 0 / 0
00:03:52.106
00:03:52.106 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:52.106 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:52.106 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:52.106 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:52.106 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:52.106 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:52.106 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:52.106 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:52.106 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:52.106 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:52.106 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:52.106 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:52.106 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:52.106 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:52.106 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:52.106 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:52.364 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:52.364 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:52.364 + rm -f /tmp/spdk-ld-path
00:03:52.364 + source autorun-spdk.conf
00:03:52.364 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:52.364 ++ SPDK_TEST_NVMF=1
00:03:52.364 ++ SPDK_TEST_NVME_CLI=1
00:03:52.364 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:52.364 ++ SPDK_TEST_NVMF_NICS=e810
00:03:52.364 ++ SPDK_TEST_VFIOUSER=1
00:03:52.364 ++ SPDK_RUN_UBSAN=1
00:03:52.364 ++ NET_TYPE=phy
00:03:52.364 ++ RUN_NIGHTLY=0
00:03:52.364 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:52.364 + [[ -n '' ]]
00:03:52.364 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:52.364 + for M in /var/spdk/build-*-manifest.txt
00:03:52.364 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:52.364 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:52.364 + for M in /var/spdk/build-*-manifest.txt
00:03:52.364 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:52.364 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:52.364 ++ uname
00:03:52.364 + [[ Linux == \L\i\n\u\x ]]
00:03:52.364 + sudo dmesg -T
00:03:52.364 + sudo dmesg --clear
00:03:52.364 + dmesg_pid=628530
00:03:52.364 + [[ Fedora Linux == FreeBSD ]]
00:03:52.364 + sudo dmesg -Tw
00:03:52.364 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:52.364 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:52.364 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:52.364 + [[ -x /usr/src/fio-static/fio ]]
00:03:52.364 + export FIO_BIN=/usr/src/fio-static/fio
00:03:52.364 + FIO_BIN=/usr/src/fio-static/fio
00:03:52.364 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:52.364 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:52.364 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:52.364 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:52.364 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:52.364 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:52.364 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:52.364 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:52.364 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:52.364 Test configuration:
00:03:52.364 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:52.364 SPDK_TEST_NVMF=1
00:03:52.364 SPDK_TEST_NVME_CLI=1
00:03:52.364 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:52.364 SPDK_TEST_NVMF_NICS=e810
00:03:52.364 SPDK_TEST_VFIOUSER=1
00:03:52.364 SPDK_RUN_UBSAN=1
00:03:52.364 NET_TYPE=phy
00:03:52.364 RUN_NIGHTLY=0
09:12:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
09:12:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
09:12:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
09:12:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
09:12:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:12:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:12:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:12:03 -- paths/export.sh@5 -- $ export PATH
09:12:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:12:03 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
09:12:03 -- common/autobuild_common.sh@444 -- $ date +%s
09:12:03 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721027523.XXXXXX
09:12:03 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721027523.3ZjP81
09:12:03 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
09:12:03 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
09:12:03 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
09:12:03 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
09:12:03 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
09:12:03 -- common/autobuild_common.sh@460 -- $ get_config_params
09:12:03 -- common/autotest_common.sh@396 -- $ xtrace_disable
09:12:03 -- common/autotest_common.sh@10 -- $ set +x
09:12:03 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
09:12:03 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
09:12:03 -- pm/common@17 -- $ local monitor
09:12:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:12:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:12:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:12:03 -- pm/common@21 -- $ date +%s
09:12:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:12:03 -- pm/common@21 -- $ date +%s
09:12:03 -- pm/common@25 -- $ sleep 1
09:12:03 -- pm/common@21 -- $ date +%s
09:12:03 -- pm/common@21 -- $ date +%s
09:12:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721027523
09:12:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721027523
09:12:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721027523
09:12:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721027523
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721027523_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721027523_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721027523_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721027523_collect-bmc-pm.bmc.pm.log
00:03:53.746 09:12:04 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
09:12:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
09:12:04 -- spdk/autobuild.sh@12 -- $ umask 022
09:12:04 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
09:12:04 -- spdk/autobuild.sh@16 -- $ date -u
00:03:53.746 Mon Jul 15 07:12:04 AM UTC 2024
09:12:04 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:53.746 v24.09-pre-203-gb0f01ebc5
09:12:04 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
09:12:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
09:12:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
09:12:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
09:12:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable
09:12:04 -- common/autotest_common.sh@10 -- $ set +x
00:03:53.746 ************************************
00:03:53.746 START TEST ubsan
00:03:53.746 ************************************
09:12:04 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:03:53.746 using ubsan
00:03:53.746
00:03:53.746 real 0m0.000s
00:03:53.746 user 0m0.000s
00:03:53.746 sys 0m0.000s
09:12:04 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
09:12:04 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:53.746 ************************************
00:03:53.746 END TEST ubsan
00:03:53.746 ************************************
09:12:04 -- common/autotest_common.sh@1142 -- $ return 0
09:12:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
09:12:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
09:12:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
09:12:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
09:12:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
09:12:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
09:12:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
09:12:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
09:12:04 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:03:53.746 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:53.746 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:54.004 Using 'verbs' RDMA provider
00:04:04.557 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:04:14.528 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:04:14.528 Creating mk/config.mk...done.
00:04:14.528 Creating mk/cc.flags.mk...done.
00:04:14.528 Type 'make' to build.
09:12:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
09:12:25 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
09:12:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable
09:12:25 -- common/autotest_common.sh@10 -- $ set +x
00:04:14.528 ************************************
00:04:14.528 START TEST make
00:04:14.528 ************************************
09:12:25 make -- common/autotest_common.sh@1123 -- $ make -j48
make[1]: Nothing to be done for 'all'.
00:04:15.924 The Meson build system
00:04:15.924 Version: 1.3.1
00:04:15.924 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:04:15.924 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:15.924 Build type: native build
00:04:15.924 Project name: libvfio-user
00:04:15.924 Project version: 0.0.1
00:04:15.924 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:04:15.924 C linker for the host machine: cc ld.bfd 2.39-16
00:04:15.924 Host machine cpu family: x86_64
00:04:15.924 Host machine cpu: x86_64
00:04:15.924 Run-time dependency threads found: YES
00:04:15.924 Library dl found: YES
00:04:15.924 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:04:15.924 Run-time dependency json-c found: YES 0.17
00:04:15.924 Run-time dependency cmocka found: YES 1.1.7
00:04:15.924 Program pytest-3 found: NO
00:04:15.924 Program flake8 found: NO
00:04:15.924 Program misspell-fixer found: NO
00:04:15.924 Program restructuredtext-lint found: NO
00:04:15.924 Program valgrind found: YES (/usr/bin/valgrind)
00:04:15.924 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:15.924 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:15.924 Compiler for C supports arguments -Wwrite-strings: YES
00:04:15.924 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:15.924 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:04:15.924 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:04:15.924 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:15.924 Build targets in project: 8
00:04:15.924 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:04:15.924 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:04:15.924
00:04:15.924 libvfio-user 0.0.1
00:04:15.924
00:04:15.924 User defined options
00:04:15.924 buildtype : debug
00:04:15.924 default_library: shared
00:04:15.924 libdir : /usr/local/lib
00:04:15.924
00:04:15.924 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:16.879 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:16.879 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:04:16.879 [2/37] Compiling C object samples/null.p/null.c.o
00:04:16.879 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:04:16.879 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:04:16.879 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:04:16.879 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:04:17.142 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:04:17.142 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:04:17.142 [9/37] Compiling C object samples/lspci.p/lspci.c.o
00:04:17.142 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:04:17.142 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:04:17.142 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:04:17.142 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:04:17.142 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:04:17.142 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:04:17.142 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:04:17.142 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:04:17.142 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:04:17.142 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:04:17.142 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:04:17.142 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:04:17.142 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:04:17.142 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:04:17.142 [24/37] Compiling C object samples/server.p/server.c.o
00:04:17.142 [25/37] Compiling C object samples/client.p/client.c.o
00:04:17.142 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:04:17.142 [27/37] Linking target samples/client
00:04:17.405 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:04:17.405 [29/37] Linking target test/unit_tests
00:04:17.405 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:04:17.405 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:04:17.667 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:04:17.667 [33/37] Linking target samples/server
00:04:17.667 [34/37] Linking target samples/null
00:04:17.667 [35/37] Linking target samples/gpio-pci-idio-16
00:04:17.667 [36/37] Linking target samples/lspci
00:04:17.667 [37/37] Linking target samples/shadow_ioeventfd_server
00:04:17.667 INFO: autodetecting backend as ninja
00:04:17.667 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:17.930 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:18.506 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:18.506 ninja: no work to do.
00:04:23.789 The Meson build system
00:04:23.789 Version: 1.3.1
00:04:23.789 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:04:23.789 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:04:23.789 Build type: native build
00:04:23.789 Program cat found: YES (/usr/bin/cat)
00:04:23.789 Project name: DPDK
00:04:23.789 Project version: 24.03.0
00:04:23.789 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:04:23.789 C linker for the host machine: cc ld.bfd 2.39-16
00:04:23.789 Host machine cpu family: x86_64
00:04:23.789 Host machine cpu: x86_64
00:04:23.789 Message: ## Building in Developer Mode ##
00:04:23.789 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:23.789 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:04:23.789 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:23.789 Program python3 found: YES (/usr/bin/python3)
00:04:23.789 Program cat found: YES (/usr/bin/cat)
00:04:23.789 Compiler for C supports arguments -march=native: YES
00:04:23.789 Checking for size of "void *" : 8
00:04:23.789 Checking for size of "void *" : 8 (cached)
00:04:23.789 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:04:23.789 Library m found: YES
00:04:23.789 Library numa found: YES
00:04:23.789 Has header "numaif.h" : YES
00:04:23.789 Library fdt found: NO
00:04:23.789 Library execinfo found: NO
00:04:23.789 Has header "execinfo.h" : YES
00:04:23.789 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:04:23.789 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:23.789 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:23.789 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:23.789 Run-time dependency openssl found: YES 3.0.9
00:04:23.789 Run-time dependency libpcap found: YES 1.10.4
00:04:23.789 Has header "pcap.h" with dependency libpcap: YES
00:04:23.789 Compiler for C supports arguments -Wcast-qual: YES
00:04:23.789 Compiler for C supports arguments -Wdeprecated: YES
00:04:23.789 Compiler for C supports arguments -Wformat: YES
00:04:23.789 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:23.789 Compiler for C supports arguments -Wformat-security: NO
00:04:23.789 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:23.789 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:23.789 Compiler for C supports arguments -Wnested-externs: YES
00:04:23.789 Compiler for C supports arguments -Wold-style-definition: YES
00:04:23.789 Compiler for C supports arguments -Wpointer-arith: YES
00:04:23.789 Compiler for C supports arguments -Wsign-compare: YES
00:04:23.789 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:23.789 Compiler for C supports arguments -Wundef: YES
00:04:23.789 Compiler for C supports arguments -Wwrite-strings: YES
00:04:23.789 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:23.789 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:23.789 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:23.789 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:23.789 Program objdump found: YES (/usr/bin/objdump)
00:04:23.789 Compiler for C supports arguments -mavx512f: YES
00:04:23.789 Checking if "AVX512 checking" compiles: YES
00:04:23.789 Fetching value of define "__SSE4_2__" : 1
00:04:23.789 Fetching value of define "__AES__" : 1
00:04:23.789 Fetching value of define "__AVX__" : 1
00:04:23.789 Fetching value of define "__AVX2__" : (undefined)
00:04:23.789 Fetching value of define "__AVX512BW__" : (undefined)
00:04:23.789 Fetching value of define "__AVX512CD__" : (undefined)
00:04:23.789 Fetching value of define "__AVX512DQ__" : (undefined)
00:04:23.789 Fetching value of define "__AVX512F__" : (undefined)
00:04:23.789 Fetching value of define "__AVX512VL__" : (undefined)
00:04:23.789 Fetching value of define "__PCLMUL__" : 1
00:04:23.789 Fetching value of define "__RDRND__" : 1
00:04:23.789 Fetching value of define "__RDSEED__" : (undefined)
00:04:23.790 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:04:23.790 Fetching value of define "__znver1__" : (undefined)
00:04:23.790 Fetching value of define "__znver2__" : (undefined)
00:04:23.790 Fetching value of define "__znver3__" : (undefined)
00:04:23.790 Fetching value of define "__znver4__" : (undefined)
00:04:23.790 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:23.790 Message: lib/log: Defining dependency "log"
00:04:23.790 Message: lib/kvargs: Defining dependency "kvargs"
00:04:23.790 Message: lib/telemetry: Defining dependency "telemetry"
00:04:23.790 Checking for function "getentropy" : NO
00:04:23.790 Message: lib/eal: Defining dependency "eal"
00:04:23.790 Message: lib/ring: Defining dependency "ring"
00:04:23.790 Message: lib/rcu: Defining dependency "rcu"
00:04:23.790 Message: lib/mempool: Defining dependency "mempool"
00:04:23.790 Message: lib/mbuf: Defining dependency "mbuf"
00:04:23.790 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:23.790 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:04:23.790 Compiler for C supports arguments -mpclmul: YES
00:04:23.790 Compiler for C supports arguments -maes: YES
00:04:23.790 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:23.790 Compiler for C supports arguments -mavx512bw: YES
00:04:23.790 Compiler for C supports arguments -mavx512dq: YES
00:04:23.790 Compiler for C supports arguments -mavx512vl: YES
00:04:23.790 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:23.790 Compiler for C supports arguments -mavx2: YES
00:04:23.790 Compiler for C supports arguments -mavx: YES
00:04:23.790 Message: lib/net: Defining dependency "net"
00:04:23.790 Message: lib/meter: Defining dependency "meter"
00:04:23.790 Message: lib/ethdev: Defining dependency "ethdev"
00:04:23.790 Message: lib/pci: Defining dependency "pci"
00:04:23.790 Message: lib/cmdline: Defining dependency "cmdline"
00:04:23.790 Message: lib/hash: Defining dependency "hash"
00:04:23.790 Message: lib/timer: Defining dependency "timer"
00:04:23.790 Message: lib/compressdev: Defining dependency "compressdev"
00:04:23.790 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:23.790 Message: lib/dmadev: Defining dependency "dmadev"
00:04:23.790 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:23.790 Message: lib/power: Defining dependency "power"
00:04:23.790 Message: lib/reorder: Defining dependency "reorder"
00:04:23.790 Message: lib/security: Defining dependency "security"
00:04:23.790 Has header "linux/userfaultfd.h" : YES
00:04:23.790 Has header "linux/vduse.h" : YES
00:04:23.790 Message: lib/vhost: Defining dependency "vhost"
00:04:23.790 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:23.790 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:23.790 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:23.790 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:23.790 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:04:23.790 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:04:23.790 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:04:23.790 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:04:23.790 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:04:23.790 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:04:23.790 Program doxygen found: YES (/usr/bin/doxygen)
00:04:23.790 Configuring doxy-api-html.conf using configuration
00:04:23.790 Configuring doxy-api-man.conf using configuration
00:04:23.790 Program mandb found: YES (/usr/bin/mandb)
00:04:23.790 Program sphinx-build found: NO
00:04:23.790 Configuring rte_build_config.h using configuration
00:04:23.790 Message:
00:04:23.790 =================
00:04:23.790 Applications Enabled
00:04:23.790 =================
00:04:23.790
00:04:23.790 apps:
00:04:23.790
00:04:23.790
00:04:23.790 Message:
00:04:23.790 =================
00:04:23.790 Libraries Enabled
00:04:23.790 =================
00:04:23.790
00:04:23.790 libs:
00:04:23.790 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:04:23.790 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:04:23.790 cryptodev, dmadev, power, reorder, security, vhost,
00:04:23.790
00:04:23.790 Message:
00:04:23.790 ===============
00:04:23.790 Drivers Enabled
00:04:23.790 ===============
00:04:23.790
00:04:23.790 common:
00:04:23.790
00:04:23.790 bus:
00:04:23.790 pci, vdev,
00:04:23.790 mempool:
00:04:23.790 ring,
00:04:23.790 dma:
00:04:23.790
00:04:23.790 net:
00:04:23.790
00:04:23.790 crypto:
00:04:23.790
00:04:23.790 compress:
00:04:23.790
00:04:23.790 vdpa:
00:04:23.790
00:04:23.790
00:04:23.790 Message:
00:04:23.790 =================
00:04:23.790 Content Skipped
00:04:23.790 =================
00:04:23.790
00:04:23.790 apps:
00:04:23.790 dumpcap: explicitly disabled via build config
00:04:23.790 graph: explicitly disabled via build config
00:04:23.790 pdump: explicitly disabled via build config
00:04:23.790 proc-info: explicitly disabled via build config
00:04:23.790 test-acl: explicitly disabled via build config
00:04:23.790 test-bbdev: explicitly disabled via build config
00:04:23.790 test-cmdline: explicitly disabled via build config
00:04:23.790 test-compress-perf: explicitly disabled via build config
00:04:23.790 test-crypto-perf: explicitly disabled via build config
00:04:23.790 test-dma-perf: explicitly disabled via build config
00:04:23.790 test-eventdev: explicitly disabled via build config
00:04:23.790 test-fib: explicitly disabled via build config
00:04:23.790 test-flow-perf: explicitly disabled via build config
00:04:23.790 test-gpudev: explicitly disabled via build config
00:04:23.790 test-mldev: explicitly disabled via build config
00:04:23.790 test-pipeline: explicitly disabled via build config
00:04:23.790 test-pmd: explicitly disabled via build config
00:04:23.790 test-regex: explicitly disabled via build config
00:04:23.790 test-sad: explicitly disabled via build config
00:04:23.790 test-security-perf: explicitly disabled via build config
00:04:23.790
00:04:23.790 libs:
00:04:23.790 argparse: explicitly disabled via build config
00:04:23.790 metrics: explicitly disabled via build config
00:04:23.790 acl: explicitly disabled via build config
00:04:23.790 bbdev: explicitly disabled via build config
00:04:23.790 bitratestats: explicitly disabled via build config
00:04:23.790 bpf: explicitly disabled via build config
00:04:23.790 cfgfile: explicitly disabled via build config
00:04:23.790 distributor: explicitly disabled via build config
00:04:23.790 efd: explicitly disabled via build config
00:04:23.790 eventdev: explicitly disabled via build config
00:04:23.790 dispatcher: explicitly disabled via build config
00:04:23.790 gpudev: explicitly disabled via build config
00:04:23.790 gro: explicitly disabled via build config
00:04:23.790 gso: explicitly disabled via build config
00:04:23.790 ip_frag: explicitly disabled via build config
00:04:23.790 jobstats: explicitly disabled via build config
00:04:23.790 latencystats: explicitly disabled via build config
00:04:23.790 lpm: explicitly disabled via build config
00:04:23.790 member: explicitly disabled via build config
00:04:23.790 pcapng: explicitly disabled via build config
00:04:23.790 rawdev: explicitly disabled via build config
00:04:23.790 regexdev: explicitly disabled via build config
00:04:23.790 mldev: explicitly disabled via build config
00:04:23.790 rib: explicitly disabled via build config
00:04:23.790 sched: explicitly disabled via build config
00:04:23.790 stack: explicitly disabled via build config
00:04:23.790 ipsec: explicitly disabled via build config
00:04:23.790 pdcp: explicitly disabled via build config
00:04:23.790 fib: explicitly disabled via build config
00:04:23.790 port: explicitly disabled via build config
00:04:23.790 pdump: explicitly disabled via build config
00:04:23.790 table: explicitly disabled via build config
00:04:23.790 pipeline: explicitly disabled via build config
00:04:23.790 graph: explicitly disabled via build config
00:04:23.790 node: explicitly disabled via build config
00:04:23.790
00:04:23.790 drivers:
00:04:23.790 common/cpt: not in enabled drivers build config
00:04:23.790 common/dpaax: not in enabled drivers build config
00:04:23.790 common/iavf: not in enabled drivers build config
00:04:23.790 common/idpf: not in enabled drivers build config
00:04:23.790 common/ionic: not in enabled drivers build config
00:04:23.790 common/mvep: not in enabled drivers build config
00:04:23.790 common/octeontx: not in enabled drivers build config
00:04:23.790 bus/auxiliary: not in enabled drivers build config
00:04:23.790 bus/cdx: not in enabled drivers build config
00:04:23.790 bus/dpaa: not in enabled drivers build config
00:04:23.790 bus/fslmc: not in enabled drivers build config
00:04:23.790 bus/ifpga: not in enabled drivers build config
00:04:23.790 bus/platform: not in enabled drivers build config
00:04:23.790 bus/uacce: not in enabled drivers build config
00:04:23.790 bus/vmbus: not in enabled drivers build config
00:04:23.790 common/cnxk: not in enabled drivers build config
00:04:23.790 common/mlx5: not in enabled drivers build config
00:04:23.790 common/nfp: not in enabled drivers build config
00:04:23.790 common/nitrox: not in enabled drivers build config
00:04:23.790 common/qat: not in enabled drivers build config
00:04:23.790 common/sfc_efx: not in enabled drivers build config
00:04:23.790 mempool/bucket: not in enabled drivers build config
00:04:23.790 mempool/cnxk: not in enabled drivers build config
00:04:23.790 mempool/dpaa: not in enabled drivers build config
00:04:23.790 mempool/dpaa2: not in enabled drivers build config
00:04:23.790 mempool/octeontx: not in enabled drivers build config
00:04:23.790 mempool/stack: not in enabled drivers build config
00:04:23.790 dma/cnxk: not in enabled drivers build config
00:04:23.790 dma/dpaa: not in enabled drivers build config
00:04:23.790 dma/dpaa2: not in enabled drivers build config
00:04:23.790 dma/hisilicon: not in enabled drivers build config
00:04:23.790 dma/idxd: not in enabled drivers build config
00:04:23.790 dma/ioat: not in enabled drivers build config
00:04:23.790 dma/skeleton: not in enabled drivers build config
00:04:23.790 net/af_packet: not in enabled drivers build config
00:04:23.790 net/af_xdp: not in enabled drivers build config
00:04:23.790 net/ark: not in enabled drivers build config
00:04:23.790 net/atlantic: not in enabled drivers build config
00:04:23.790 net/avp: not in enabled drivers build config
00:04:23.790 net/axgbe: not in enabled drivers build config
00:04:23.790 net/bnx2x: not in enabled drivers build config
00:04:23.790 net/bnxt: not in enabled drivers build config
00:04:23.790 net/bonding: not in enabled drivers build config
00:04:23.790 net/cnxk: not in enabled drivers build config
00:04:23.790 net/cpfl: not in enabled drivers build config
00:04:23.790 net/cxgbe: not in enabled drivers build config
00:04:23.790 net/dpaa: not in enabled drivers build config
00:04:23.790 net/dpaa2: not in enabled drivers build config
00:04:23.791 net/e1000: not in enabled drivers build config
00:04:23.791 net/ena: not in enabled drivers build config
00:04:23.791 net/enetc: not in enabled drivers build config
00:04:23.791 net/enetfec: not in enabled drivers build config
00:04:23.791 net/enic: not in enabled drivers build config
00:04:23.791 net/failsafe: not in enabled drivers build config
00:04:23.791 net/fm10k: not in enabled drivers build config
00:04:23.791 net/gve: not in enabled drivers build config
00:04:23.791 net/hinic: not in enabled drivers build config
00:04:23.791 net/hns3: not in enabled drivers build config
00:04:23.791 net/i40e: not in enabled drivers build config
00:04:23.791 net/iavf: not in enabled drivers build config
00:04:23.791 net/ice: not in enabled drivers build config
00:04:23.791 net/idpf: not in enabled drivers build config
00:04:23.791 net/igc: not in enabled drivers build config
00:04:23.791 net/ionic: not in enabled drivers build config
00:04:23.791 net/ipn3ke: not in enabled drivers build config
00:04:23.791 net/ixgbe: not in enabled drivers build config
00:04:23.791 net/mana: not in enabled drivers build config
00:04:23.791 net/memif: not in enabled drivers build config
00:04:23.791 net/mlx4: not in enabled drivers build config
00:04:23.791 net/mlx5: not in enabled drivers build config
00:04:23.791 net/mvneta: not in enabled drivers build config
00:04:23.791 net/mvpp2: not in enabled drivers build config
00:04:23.791 net/netvsc: not in enabled drivers build config
00:04:23.791 net/nfb: not in enabled drivers build config
00:04:23.791 net/nfp: not in enabled drivers build config
00:04:23.791 net/ngbe: not in enabled drivers build config
00:04:23.791 net/null: not in enabled drivers build config
00:04:23.791 net/octeontx: not in enabled drivers build config
00:04:23.791 net/octeon_ep: not in enabled drivers build config
00:04:23.791 net/pcap: not in enabled drivers build config
00:04:23.791 net/pfe: not in enabled drivers build config
00:04:23.791 net/qede: not in enabled drivers build config
00:04:23.791 net/ring: not in enabled drivers build config
00:04:23.791 net/sfc: not in enabled drivers build config
00:04:23.791 net/softnic: not in enabled drivers build config
00:04:23.791 net/tap: not in enabled drivers build config
00:04:23.791 net/thunderx: not in enabled drivers build config
00:04:23.791 net/txgbe: not in enabled drivers build config
00:04:23.791 net/vdev_netvsc: not in enabled drivers build config
00:04:23.791 net/vhost: not in enabled drivers build config
00:04:23.791 net/virtio: not in enabled drivers build config
00:04:23.791 net/vmxnet3: not in enabled drivers build config
00:04:23.791 raw/*: missing internal dependency, "rawdev"
00:04:23.791 crypto/armv8: not in enabled drivers build config
00:04:23.791 crypto/bcmfs: not in enabled drivers build config
00:04:23.791 crypto/caam_jr: not in enabled drivers build config
00:04:23.791 crypto/ccp: not in enabled drivers build config
00:04:23.791 crypto/cnxk: not in enabled drivers build config
00:04:23.791 crypto/dpaa_sec: not in enabled drivers build config
00:04:23.791 crypto/dpaa2_sec: not in enabled drivers build config
00:04:23.791 crypto/ipsec_mb: not in enabled drivers build config
00:04:23.791 crypto/mlx5: not in enabled drivers build config
00:04:23.791 crypto/mvsam: not in enabled drivers build config
00:04:23.791 crypto/nitrox: not in enabled drivers build config
00:04:23.791 crypto/null: not in enabled drivers build config
00:04:23.791 crypto/octeontx: not in enabled drivers build config
00:04:23.791 crypto/openssl: not in enabled drivers build config
00:04:23.791 crypto/scheduler: not in enabled drivers build config
00:04:23.791 crypto/uadk: not in enabled drivers build config
00:04:23.791 crypto/virtio: not in enabled drivers build config
00:04:23.791 compress/isal: not in enabled drivers build config
00:04:23.791 compress/mlx5: not in enabled drivers build config
00:04:23.791 compress/nitrox: not in enabled drivers build config
00:04:23.791 compress/octeontx: not in enabled drivers build config
00:04:23.791 compress/zlib: not in enabled drivers build config
00:04:23.791 regex/*: missing internal dependency, "regexdev"
00:04:23.791 ml/*: missing internal dependency, "mldev"
00:04:23.791 vdpa/ifc: not in enabled drivers build config
00:04:23.791 vdpa/mlx5: not in enabled drivers build config
00:04:23.791 vdpa/nfp: not in enabled drivers build config
00:04:23.791 vdpa/sfc: not in enabled drivers build config
00:04:23.791 event/*: missing internal dependency, "eventdev"
00:04:23.791 baseband/*: missing internal dependency, "bbdev"
00:04:23.791 gpu/*: missing internal dependency, "gpudev"
00:04:23.791
00:04:23.791
00:04:23.791 Build targets in project: 85
00:04:23.791
00:04:23.791 DPDK 24.03.0
00:04:23.791
00:04:23.791 User defined options
00:04:23.791 buildtype : debug
00:04:23.791 default_library : shared
00:04:23.791 libdir : lib
00:04:23.791 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:04:23.791 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:04:23.791 c_link_args :
00:04:23.791 cpu_instruction_set: native
00:04:23.791 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:04:23.791 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:04:23.791 enable_docs : false
00:04:23.791 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:04:23.791 enable_kmods : false
00:04:23.791 max_lcores : 128
00:04:23.791 tests : false
00:04:23.791
00:04:23.791 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:23.791 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:04:23.791 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[12/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[13/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:04:24.057 [14/268] Linking static target lib/librte_kvargs.a
00:04:24.057 [15/268] Linking static target lib/librte_log.a
00:04:24.057 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:04:24.628 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:04:24.628 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:04:24.628 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:04:24.628 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:04:24.628 [21/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:04:24.628 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:04:24.628 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:04:24.628 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:04:24.628 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:04:24.628 [26/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:04:24.628 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:04:24.628 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:04:24.628 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:04:24.628 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:04:24.629 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:04:24.629 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:04:24.629 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:04:24.629 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:04:24.629 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:04:24.629 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:04:24.890 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:04:24.890 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:04:24.890 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:04:24.890 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:04:24.890 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:04:24.890 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:04:24.890 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:04:24.890 [44/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:04:24.890 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:04:24.890 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:04:24.890 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:04:24.890 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:04:24.890 [49/268] Linking static target lib/librte_telemetry.a
00:04:24.890 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:04:24.890 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:04:24.890 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:04:24.890 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:04:24.890 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:04:24.890 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:04:24.890 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:04:24.890 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:04:24.890 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:04:24.890 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:04:24.890 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:04:24.890 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:04:24.890 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:04:24.890 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:04:25.153 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:04:25.153 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:04:25.153 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:04:25.413 [67/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:04:25.413 [68/268] Linking static target lib/librte_pci.a
00:04:25.413 [69/268] Linking target lib/librte_log.so.24.1
00:04:25.413 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:04:25.678 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:04:25.678 [72/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:04:25.678 [73/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:04:25.678 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:04:25.678 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:04:25.678 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:04:25.678 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:04:25.678 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:04:25.678 [79/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:04:25.678 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:04:25.678 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:04:25.678 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:04:25.678 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:04:25.678 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:04:25.678 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:04:25.678 [86/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:04:25.678 [87/268] Linking static target lib/librte_ring.a
00:04:25.678 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:04:25.678 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:04:25.678 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:04:25.678 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:04:25.678 [92/268] Linking target lib/librte_kvargs.so.24.1
00:04:25.678 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:04:25.948 [94/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:04:25.948 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:04:25.948 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:04:25.948 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:04:25.948 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:04:25.948 [99/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:04:25.948 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:04:25.948 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:04:25.948 [102/268] Linking static target lib/librte_meter.a
00:04:25.948 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:04:25.948 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:04:25.948 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:04:25.948 [106/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:04:25.948 [107/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:04:25.948 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:04:25.948 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:04:25.948 [110/268] Linking static target lib/librte_eal.a
00:04:25.948 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:04:25.948 [112/268] Compiling C
object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:25.948 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:25.948 [114/268] Linking static target lib/librte_rcu.a 00:04:25.948 [115/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:25.948 [116/268] Linking target lib/librte_telemetry.so.24.1 00:04:25.948 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:25.948 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:25.948 [119/268] Linking static target lib/librte_mempool.a 00:04:25.948 [120/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:25.948 [121/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:25.948 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:26.231 [123/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:26.231 [124/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:26.231 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:26.231 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:26.231 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:26.231 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:26.231 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:26.231 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:26.231 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:26.231 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:26.231 [133/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:26.231 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:26.231 [135/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:26.231 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.495 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:26.495 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.495 [139/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:26.495 [140/268] Linking static target lib/librte_net.a 00:04:26.495 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:26.495 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:26.756 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:26.756 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:26.756 [145/268] Linking static target lib/librte_cmdline.a 00:04:26.756 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:26.756 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.756 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:26.756 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:26.756 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:26.756 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:26.756 [152/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:26.756 [153/268] Linking static target lib/librte_timer.a 00:04:26.756 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:26.756 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:26.756 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:27.014 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:27.014 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:27.014 [159/268] Linking static target lib/librte_dmadev.a 00:04:27.014 [160/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.014 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:27.014 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:27.014 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:27.014 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:27.014 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:27.014 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:27.014 [167/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.273 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:27.273 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:27.273 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:27.273 [171/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.273 [172/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:27.273 [173/268] Linking static target lib/librte_power.a 00:04:27.273 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:27.273 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:27.273 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:27.273 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:27.273 [178/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:27.273 [179/268] Linking static target lib/librte_compressdev.a 00:04:27.273 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:27.273 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:27.273 [182/268] Linking static target lib/librte_hash.a 00:04:27.273 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:27.273 [184/268] Linking static target lib/librte_reorder.a 00:04:27.273 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:27.531 [186/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.531 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:27.531 [188/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:27.531 [189/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:27.531 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:27.531 [191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.531 [192/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:27.531 [193/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:27.531 [194/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:27.531 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:27.531 [196/268] Linking static target lib/librte_mbuf.a 00:04:27.531 [197/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.790 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:27.790 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:27.790 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:27.790 [201/268] Linking static target drivers/librte_bus_vdev.a 00:04:27.790 [202/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.790 [203/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:27.790 [204/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:27.790 [205/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.790 [206/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:27.790 [207/268] Linking static target lib/librte_security.a 00:04:27.790 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:27.790 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:27.790 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:27.790 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:27.790 [212/268] Linking static target drivers/librte_bus_pci.a 00:04:27.790 [213/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.048 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.048 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:28.048 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:28.048 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:28.048 [218/268] Linking static target drivers/librte_mempool_ring.a 00:04:28.048 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:28.048 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.048 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:28.048 [222/268] Linking static target lib/librte_ethdev.a 00:04:28.048 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.320 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:28.320 [225/268] Linking static target lib/librte_cryptodev.a 00:04:28.320 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.256 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.631 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:32.534 [229/268] Generating lib/eal.sym_chk with a custom command 
(wrapped by meson to capture output) 00:04:32.534 [230/268] Linking target lib/librte_eal.so.24.1 00:04:32.534 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.534 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:32.534 [233/268] Linking target lib/librte_ring.so.24.1 00:04:32.534 [234/268] Linking target lib/librte_pci.so.24.1 00:04:32.534 [235/268] Linking target lib/librte_timer.so.24.1 00:04:32.534 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:32.534 [237/268] Linking target lib/librte_dmadev.so.24.1 00:04:32.534 [238/268] Linking target lib/librte_meter.so.24.1 00:04:32.534 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:32.534 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:32.534 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:32.534 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:32.534 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:32.534 [244/268] Linking target lib/librte_rcu.so.24.1 00:04:32.534 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:32.534 [246/268] Linking target lib/librte_mempool.so.24.1 00:04:32.791 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:32.791 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:32.791 [249/268] Linking target lib/librte_mbuf.so.24.1 00:04:32.791 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:32.791 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:33.049 [252/268] Linking target lib/librte_reorder.so.24.1 00:04:33.049 [253/268] Linking target lib/librte_net.so.24.1 00:04:33.049 [254/268] Linking target lib/librte_compressdev.so.24.1 00:04:33.049 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:04:33.049 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:33.049 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:33.049 [258/268] Linking target lib/librte_security.so.24.1 00:04:33.049 [259/268] Linking target lib/librte_hash.so.24.1 00:04:33.049 [260/268] Linking target lib/librte_cmdline.so.24.1 00:04:33.049 [261/268] Linking target lib/librte_ethdev.so.24.1 00:04:33.310 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:33.310 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:33.310 [264/268] Linking target lib/librte_power.so.24.1 00:04:35.851 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:35.851 [266/268] Linking static target lib/librte_vhost.a 00:04:36.788 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.788 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:36.788 INFO: autodetecting backend as ninja 00:04:36.788 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:04:37.725 CC lib/ut_mock/mock.o 00:04:37.725 CC lib/log/log.o 00:04:37.725 CC lib/log/log_flags.o 00:04:37.725 CC lib/log/log_deprecated.o 00:04:37.725 CC lib/ut/ut.o 00:04:37.983 LIB libspdk_log.a 
00:04:37.983 LIB libspdk_ut_mock.a 00:04:37.983 LIB libspdk_ut.a 00:04:37.983 SO libspdk_ut_mock.so.6.0 00:04:37.983 SO libspdk_log.so.7.0 00:04:37.983 SO libspdk_ut.so.2.0 00:04:37.983 SYMLINK libspdk_ut_mock.so 00:04:37.983 SYMLINK libspdk_ut.so 00:04:37.983 SYMLINK libspdk_log.so 00:04:38.241 CXX lib/trace_parser/trace.o 00:04:38.241 CC lib/dma/dma.o 00:04:38.241 CC lib/ioat/ioat.o 00:04:38.241 CC lib/util/base64.o 00:04:38.241 CC lib/util/bit_array.o 00:04:38.241 CC lib/util/cpuset.o 00:04:38.241 CC lib/util/crc16.o 00:04:38.241 CC lib/util/crc32.o 00:04:38.241 CC lib/util/crc32c.o 00:04:38.241 CC lib/util/crc32_ieee.o 00:04:38.241 CC lib/util/crc64.o 00:04:38.241 CC lib/util/dif.o 00:04:38.241 CC lib/util/fd.o 00:04:38.241 CC lib/util/file.o 00:04:38.241 CC lib/util/hexlify.o 00:04:38.241 CC lib/util/iov.o 00:04:38.241 CC lib/util/math.o 00:04:38.241 CC lib/util/pipe.o 00:04:38.241 CC lib/util/strerror_tls.o 00:04:38.241 CC lib/util/string.o 00:04:38.241 CC lib/util/uuid.o 00:04:38.241 CC lib/util/fd_group.o 00:04:38.241 CC lib/util/xor.o 00:04:38.241 CC lib/util/zipf.o 00:04:38.241 CC lib/vfio_user/host/vfio_user_pci.o 00:04:38.241 CC lib/vfio_user/host/vfio_user.o 00:04:38.241 LIB libspdk_dma.a 00:04:38.500 SO libspdk_dma.so.4.0 00:04:38.500 SYMLINK libspdk_dma.so 00:04:38.500 LIB libspdk_ioat.a 00:04:38.500 SO libspdk_ioat.so.7.0 00:04:38.500 SYMLINK libspdk_ioat.so 00:04:38.500 LIB libspdk_vfio_user.a 00:04:38.500 SO libspdk_vfio_user.so.5.0 00:04:38.500 SYMLINK libspdk_vfio_user.so 00:04:38.758 LIB libspdk_util.a 00:04:38.758 SO libspdk_util.so.9.1 00:04:39.016 SYMLINK libspdk_util.so 00:04:39.016 CC lib/conf/conf.o 00:04:39.016 CC lib/json/json_parse.o 00:04:39.016 CC lib/env_dpdk/env.o 00:04:39.016 CC lib/idxd/idxd.o 00:04:39.016 CC lib/rdma_provider/common.o 00:04:39.016 CC lib/vmd/vmd.o 00:04:39.016 LIB libspdk_trace_parser.a 00:04:39.016 CC lib/rdma_utils/rdma_utils.o 00:04:39.016 CC lib/json/json_util.o 00:04:39.016 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:39.016 CC lib/env_dpdk/memory.o 00:04:39.016 CC lib/idxd/idxd_user.o 00:04:39.016 CC lib/vmd/led.o 00:04:39.016 CC lib/json/json_write.o 00:04:39.016 CC lib/env_dpdk/pci.o 00:04:39.016 CC lib/idxd/idxd_kernel.o 00:04:39.016 CC lib/env_dpdk/init.o 00:04:39.016 CC lib/env_dpdk/threads.o 00:04:39.016 CC lib/env_dpdk/pci_ioat.o 00:04:39.016 CC lib/env_dpdk/pci_virtio.o 00:04:39.016 CC lib/env_dpdk/pci_vmd.o 00:04:39.016 CC lib/env_dpdk/pci_idxd.o 00:04:39.016 CC lib/env_dpdk/pci_event.o 00:04:39.016 CC lib/env_dpdk/sigbus_handler.o 00:04:39.016 CC lib/env_dpdk/pci_dpdk.o 00:04:39.016 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:39.016 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:39.016 SO libspdk_trace_parser.so.5.0 00:04:39.274 SYMLINK libspdk_trace_parser.so 00:04:39.274 LIB libspdk_rdma_provider.a 00:04:39.274 LIB libspdk_rdma_utils.a 00:04:39.532 SO libspdk_rdma_utils.so.1.0 00:04:39.533 LIB libspdk_json.a 00:04:39.533 SO libspdk_rdma_provider.so.6.0 00:04:39.533 LIB libspdk_conf.a 00:04:39.533 SO libspdk_conf.so.6.0 00:04:39.533 SO libspdk_json.so.6.0 00:04:39.533 SYMLINK libspdk_rdma_utils.so 00:04:39.533 SYMLINK libspdk_rdma_provider.so 00:04:39.533 SYMLINK libspdk_conf.so 00:04:39.533 SYMLINK libspdk_json.so 00:04:39.533 CC lib/jsonrpc/jsonrpc_server.o 00:04:39.533 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:39.533 CC lib/jsonrpc/jsonrpc_client.o 00:04:39.533 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:39.790 LIB libspdk_idxd.a 00:04:39.790 SO libspdk_idxd.so.12.0 00:04:39.790 SYMLINK libspdk_idxd.so 00:04:39.790 LIB 
libspdk_vmd.a 00:04:39.790 SO libspdk_vmd.so.6.0 00:04:39.790 SYMLINK libspdk_vmd.so 00:04:40.048 LIB libspdk_jsonrpc.a 00:04:40.048 SO libspdk_jsonrpc.so.6.0 00:04:40.048 SYMLINK libspdk_jsonrpc.so 00:04:40.307 CC lib/rpc/rpc.o 00:04:40.307 LIB libspdk_rpc.a 00:04:40.307 SO libspdk_rpc.so.6.0 00:04:40.565 SYMLINK libspdk_rpc.so 00:04:40.565 CC lib/notify/notify.o 00:04:40.565 CC lib/keyring/keyring.o 00:04:40.565 CC lib/notify/notify_rpc.o 00:04:40.565 CC lib/trace/trace.o 00:04:40.565 CC lib/keyring/keyring_rpc.o 00:04:40.565 CC lib/trace/trace_flags.o 00:04:40.565 CC lib/trace/trace_rpc.o 00:04:40.823 LIB libspdk_notify.a 00:04:40.823 SO libspdk_notify.so.6.0 00:04:40.823 LIB libspdk_keyring.a 00:04:40.823 SYMLINK libspdk_notify.so 00:04:40.823 LIB libspdk_trace.a 00:04:40.823 SO libspdk_keyring.so.1.0 00:04:40.823 SO libspdk_trace.so.10.0 00:04:41.081 SYMLINK libspdk_keyring.so 00:04:41.081 SYMLINK libspdk_trace.so 00:04:41.081 LIB libspdk_env_dpdk.a 00:04:41.081 SO libspdk_env_dpdk.so.14.1 00:04:41.082 CC lib/sock/sock.o 00:04:41.082 CC lib/sock/sock_rpc.o 00:04:41.082 CC lib/thread/thread.o 00:04:41.082 CC lib/thread/iobuf.o 00:04:41.340 SYMLINK libspdk_env_dpdk.so 00:04:41.598 LIB libspdk_sock.a 00:04:41.598 SO libspdk_sock.so.10.0 00:04:41.598 SYMLINK libspdk_sock.so 00:04:41.856 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:41.857 CC lib/nvme/nvme_ctrlr.o 00:04:41.857 CC lib/nvme/nvme_fabric.o 00:04:41.857 CC lib/nvme/nvme_ns_cmd.o 00:04:41.857 CC lib/nvme/nvme_ns.o 00:04:41.857 CC lib/nvme/nvme_pcie_common.o 00:04:41.857 CC lib/nvme/nvme_pcie.o 00:04:41.857 CC lib/nvme/nvme_qpair.o 00:04:41.857 CC lib/nvme/nvme.o 00:04:41.857 CC lib/nvme/nvme_quirks.o 00:04:41.857 CC lib/nvme/nvme_transport.o 00:04:41.857 CC lib/nvme/nvme_discovery.o 00:04:41.857 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:41.857 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:41.857 CC lib/nvme/nvme_tcp.o 00:04:41.857 CC lib/nvme/nvme_opal.o 00:04:41.857 CC lib/nvme/nvme_io_msg.o 00:04:41.857 CC lib/nvme/nvme_poll_group.o 00:04:41.857 CC lib/nvme/nvme_zns.o 00:04:41.857 CC lib/nvme/nvme_stubs.o 00:04:41.857 CC lib/nvme/nvme_auth.o 00:04:41.857 CC lib/nvme/nvme_cuse.o 00:04:41.857 CC lib/nvme/nvme_vfio_user.o 00:04:41.857 CC lib/nvme/nvme_rdma.o 00:04:42.791 LIB libspdk_thread.a 00:04:42.791 SO libspdk_thread.so.10.1 00:04:42.791 SYMLINK libspdk_thread.so 00:04:43.049 CC lib/virtio/virtio.o 00:04:43.049 CC lib/vfu_tgt/tgt_endpoint.o 00:04:43.049 CC lib/init/json_config.o 00:04:43.049 CC lib/blob/blobstore.o 00:04:43.049 CC lib/accel/accel.o 00:04:43.049 CC lib/virtio/virtio_vhost_user.o 00:04:43.049 CC lib/init/subsystem.o 00:04:43.049 CC lib/vfu_tgt/tgt_rpc.o 00:04:43.049 CC lib/blob/request.o 00:04:43.049 CC lib/accel/accel_rpc.o 00:04:43.049 CC lib/init/subsystem_rpc.o 00:04:43.049 CC lib/virtio/virtio_vfio_user.o 00:04:43.049 CC lib/blob/zeroes.o 00:04:43.049 CC lib/accel/accel_sw.o 00:04:43.049 CC lib/init/rpc.o 00:04:43.049 CC lib/virtio/virtio_pci.o 00:04:43.049 CC lib/blob/blob_bs_dev.o 00:04:43.307 LIB libspdk_init.a 00:04:43.307 SO libspdk_init.so.5.0 00:04:43.307 LIB libspdk_virtio.a 00:04:43.307 LIB libspdk_vfu_tgt.a 00:04:43.307 SYMLINK libspdk_init.so 00:04:43.307 SO libspdk_vfu_tgt.so.3.0 00:04:43.307 SO libspdk_virtio.so.7.0 00:04:43.307 SYMLINK libspdk_vfu_tgt.so 00:04:43.307 SYMLINK libspdk_virtio.so 00:04:43.565 CC lib/event/app.o 00:04:43.565 CC lib/event/reactor.o 00:04:43.565 CC lib/event/log_rpc.o 00:04:43.565 CC lib/event/app_rpc.o 00:04:43.565 CC lib/event/scheduler_static.o 00:04:43.823 LIB libspdk_event.a 
00:04:43.823 SO libspdk_event.so.14.0 00:04:44.082 LIB libspdk_accel.a 00:04:44.082 SYMLINK libspdk_event.so 00:04:44.082 SO libspdk_accel.so.15.1 00:04:44.082 SYMLINK libspdk_accel.so 00:04:44.082 LIB libspdk_nvme.a 00:04:44.340 SO libspdk_nvme.so.13.1 00:04:44.340 CC lib/bdev/bdev.o 00:04:44.340 CC lib/bdev/bdev_rpc.o 00:04:44.340 CC lib/bdev/bdev_zone.o 00:04:44.340 CC lib/bdev/part.o 00:04:44.340 CC lib/bdev/scsi_nvme.o 00:04:44.598 SYMLINK libspdk_nvme.so 00:04:45.972 LIB libspdk_blob.a 00:04:45.972 SO libspdk_blob.so.11.0 00:04:45.972 SYMLINK libspdk_blob.so 00:04:46.230 CC lib/blobfs/blobfs.o 00:04:46.230 CC lib/blobfs/tree.o 00:04:46.230 CC lib/lvol/lvol.o 00:04:46.796 LIB libspdk_bdev.a 00:04:46.796 SO libspdk_bdev.so.15.1 00:04:46.796 SYMLINK libspdk_bdev.so 00:04:47.063 CC lib/ublk/ublk.o 00:04:47.063 CC lib/nvmf/ctrlr.o 00:04:47.063 CC lib/nbd/nbd.o 00:04:47.063 CC lib/scsi/dev.o 00:04:47.063 CC lib/ublk/ublk_rpc.o 00:04:47.063 CC lib/nvmf/ctrlr_discovery.o 00:04:47.063 CC lib/nbd/nbd_rpc.o 00:04:47.063 CC lib/scsi/lun.o 00:04:47.063 CC lib/ftl/ftl_core.o 00:04:47.063 CC lib/nvmf/ctrlr_bdev.o 00:04:47.063 CC lib/scsi/port.o 00:04:47.063 CC lib/scsi/scsi.o 00:04:47.063 CC lib/nvmf/subsystem.o 00:04:47.063 CC lib/ftl/ftl_init.o 00:04:47.063 CC lib/scsi/scsi_bdev.o 00:04:47.063 CC lib/nvmf/nvmf.o 00:04:47.063 CC lib/ftl/ftl_layout.o 00:04:47.063 CC lib/nvmf/nvmf_rpc.o 00:04:47.063 CC lib/scsi/scsi_pr.o 00:04:47.063 CC lib/ftl/ftl_debug.o 00:04:47.063 CC lib/nvmf/transport.o 00:04:47.063 CC lib/ftl/ftl_io.o 00:04:47.063 CC lib/scsi/scsi_rpc.o 00:04:47.063 CC lib/nvmf/tcp.o 00:04:47.063 CC lib/nvmf/stubs.o 00:04:47.063 CC lib/scsi/task.o 00:04:47.063 CC lib/ftl/ftl_sb.o 00:04:47.063 CC lib/ftl/ftl_l2p.o 00:04:47.063 CC lib/nvmf/mdns_server.o 00:04:47.063 CC lib/ftl/ftl_l2p_flat.o 00:04:47.063 CC lib/nvmf/vfio_user.o 00:04:47.063 CC lib/ftl/ftl_nv_cache.o 00:04:47.063 CC lib/nvmf/rdma.o 00:04:47.063 CC lib/ftl/ftl_band.o 00:04:47.063 CC lib/nvmf/auth.o 00:04:47.063 CC lib/ftl/ftl_band_ops.o 00:04:47.063 CC lib/ftl/ftl_writer.o 00:04:47.063 CC lib/ftl/ftl_rq.o 00:04:47.063 CC lib/ftl/ftl_reloc.o 00:04:47.063 CC lib/ftl/ftl_l2p_cache.o 00:04:47.063 CC lib/ftl/ftl_p2l.o 00:04:47.063 CC lib/ftl/mngt/ftl_mngt.o 00:04:47.063 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:47.063 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:47.063 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:47.063 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:47.063 LIB libspdk_blobfs.a 00:04:47.063 SO libspdk_blobfs.so.10.0 00:04:47.324 SYMLINK libspdk_blobfs.so 00:04:47.324 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:47.324 LIB libspdk_lvol.a 00:04:47.324 SO libspdk_lvol.so.10.0 00:04:47.324 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:47.324 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:47.324 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:47.324 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:47.324 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:47.324 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:47.324 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:47.324 CC lib/ftl/utils/ftl_conf.o 00:04:47.324 CC lib/ftl/utils/ftl_md.o 00:04:47.324 SYMLINK libspdk_lvol.so 00:04:47.585 CC lib/ftl/utils/ftl_mempool.o 00:04:47.585 CC lib/ftl/utils/ftl_bitmap.o 00:04:47.585 CC lib/ftl/utils/ftl_property.o 00:04:47.585 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:47.585 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:47.585 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:47.585 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:47.585 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:47.585 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:04:47.585 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:47.585 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:47.585 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:47.585 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:47.843 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:47.843 CC lib/ftl/base/ftl_base_dev.o 00:04:47.843 CC lib/ftl/base/ftl_base_bdev.o 00:04:47.843 CC lib/ftl/ftl_trace.o 00:04:47.843 LIB libspdk_nbd.a 00:04:47.843 SO libspdk_nbd.so.7.0 00:04:47.843 SYMLINK libspdk_nbd.so 00:04:47.843 LIB libspdk_scsi.a 00:04:48.101 SO libspdk_scsi.so.9.0 00:04:48.101 LIB libspdk_ublk.a 00:04:48.101 SYMLINK libspdk_scsi.so 00:04:48.101 SO libspdk_ublk.so.3.0 00:04:48.101 SYMLINK libspdk_ublk.so 00:04:48.359 CC lib/iscsi/conn.o 00:04:48.359 CC lib/vhost/vhost.o 00:04:48.359 CC lib/vhost/vhost_rpc.o 00:04:48.359 CC lib/iscsi/init_grp.o 00:04:48.359 CC lib/vhost/vhost_scsi.o 00:04:48.359 CC lib/iscsi/iscsi.o 00:04:48.359 CC lib/iscsi/md5.o 00:04:48.359 CC lib/vhost/vhost_blk.o 00:04:48.359 CC lib/vhost/rte_vhost_user.o 00:04:48.359 CC lib/iscsi/param.o 00:04:48.359 CC lib/iscsi/portal_grp.o 00:04:48.359 CC lib/iscsi/tgt_node.o 00:04:48.359 CC lib/iscsi/iscsi_subsystem.o 00:04:48.359 CC lib/iscsi/iscsi_rpc.o 00:04:48.359 CC lib/iscsi/task.o 00:04:48.359 LIB libspdk_ftl.a 00:04:48.616 SO libspdk_ftl.so.9.0 00:04:48.873 SYMLINK libspdk_ftl.so 00:04:49.438 LIB libspdk_vhost.a 00:04:49.438 SO libspdk_vhost.so.8.0 00:04:49.696 SYMLINK libspdk_vhost.so 00:04:49.696 LIB libspdk_nvmf.a 00:04:49.696 LIB libspdk_iscsi.a 00:04:49.696 SO libspdk_nvmf.so.18.1 00:04:49.696 SO libspdk_iscsi.so.8.0 00:04:49.954 SYMLINK libspdk_iscsi.so 00:04:49.954 SYMLINK libspdk_nvmf.so 00:04:50.212 CC module/vfu_device/vfu_virtio.o 00:04:50.212 CC module/vfu_device/vfu_virtio_blk.o 00:04:50.212 CC module/vfu_device/vfu_virtio_scsi.o 00:04:50.212 CC module/vfu_device/vfu_virtio_rpc.o 00:04:50.212 CC module/env_dpdk/env_dpdk_rpc.o 00:04:50.212 CC module/keyring/file/keyring.o 00:04:50.212 CC module/keyring/file/keyring_rpc.o 00:04:50.212 CC module/sock/posix/posix.o 00:04:50.212 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:50.212 CC module/blob/bdev/blob_bdev.o 00:04:50.212 CC module/accel/dsa/accel_dsa.o 00:04:50.212 CC module/keyring/linux/keyring.o 00:04:50.212 CC module/accel/dsa/accel_dsa_rpc.o 00:04:50.212 CC module/keyring/linux/keyring_rpc.o 00:04:50.212 CC module/accel/ioat/accel_ioat.o 00:04:50.212 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:50.212 CC module/accel/ioat/accel_ioat_rpc.o 00:04:50.212 CC module/accel/iaa/accel_iaa.o 00:04:50.212 CC module/accel/iaa/accel_iaa_rpc.o 00:04:50.212 CC module/accel/error/accel_error.o 00:04:50.212 CC module/scheduler/gscheduler/gscheduler.o 00:04:50.212 CC module/accel/error/accel_error_rpc.o 00:04:50.212 LIB libspdk_env_dpdk_rpc.a 00:04:50.471 SO libspdk_env_dpdk_rpc.so.6.0 00:04:50.471 SYMLINK libspdk_env_dpdk_rpc.so 00:04:50.471 LIB libspdk_keyring_linux.a 00:04:50.471 LIB libspdk_keyring_file.a 00:04:50.471 LIB libspdk_scheduler_dpdk_governor.a 00:04:50.471 LIB libspdk_scheduler_gscheduler.a 00:04:50.471 SO libspdk_keyring_linux.so.1.0 00:04:50.471 SO libspdk_keyring_file.so.1.0 00:04:50.471 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:50.471 SO libspdk_scheduler_gscheduler.so.4.0 00:04:50.471 LIB libspdk_accel_error.a 00:04:50.471 LIB libspdk_accel_ioat.a 00:04:50.471 LIB libspdk_scheduler_dynamic.a 00:04:50.471 LIB libspdk_accel_iaa.a 00:04:50.471 SO libspdk_accel_error.so.2.0 00:04:50.471 SO libspdk_scheduler_dynamic.so.4.0 00:04:50.471 SO libspdk_accel_ioat.so.6.0 00:04:50.471 
SYMLINK libspdk_keyring_linux.so 00:04:50.471 SYMLINK libspdk_keyring_file.so 00:04:50.471 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:50.471 SYMLINK libspdk_scheduler_gscheduler.so 00:04:50.471 SO libspdk_accel_iaa.so.3.0 00:04:50.471 SYMLINK libspdk_scheduler_dynamic.so 00:04:50.471 LIB libspdk_blob_bdev.a 00:04:50.471 SYMLINK libspdk_accel_error.so 00:04:50.471 LIB libspdk_accel_dsa.a 00:04:50.471 SYMLINK libspdk_accel_ioat.so 00:04:50.471 SO libspdk_blob_bdev.so.11.0 00:04:50.471 SYMLINK libspdk_accel_iaa.so 00:04:50.471 SO libspdk_accel_dsa.so.5.0 00:04:50.730 SYMLINK libspdk_blob_bdev.so 00:04:50.730 SYMLINK libspdk_accel_dsa.so 00:04:50.730 LIB libspdk_vfu_device.a 00:04:50.991 SO libspdk_vfu_device.so.3.0 00:04:50.991 CC module/bdev/lvol/vbdev_lvol.o 00:04:50.991 CC module/bdev/gpt/gpt.o 00:04:50.991 CC module/bdev/null/bdev_null.o 00:04:50.991 CC module/bdev/nvme/bdev_nvme.o 00:04:50.991 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:50.991 CC module/bdev/raid/bdev_raid.o 00:04:50.991 CC module/bdev/null/bdev_null_rpc.o 00:04:50.991 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:50.991 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:50.991 CC module/bdev/error/vbdev_error.o 00:04:50.991 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:50.991 CC module/bdev/gpt/vbdev_gpt.o 00:04:50.991 CC module/bdev/aio/bdev_aio.o 00:04:50.991 CC module/bdev/split/vbdev_split.o 00:04:50.991 CC module/blobfs/bdev/blobfs_bdev.o 00:04:50.991 CC module/bdev/error/vbdev_error_rpc.o 00:04:50.991 CC module/bdev/nvme/bdev_mdns_client.o 00:04:50.991 CC module/bdev/raid/bdev_raid_rpc.o 00:04:50.991 CC module/bdev/aio/bdev_aio_rpc.o 00:04:50.991 CC module/bdev/nvme/nvme_rpc.o 00:04:50.991 CC module/bdev/ftl/bdev_ftl.o 00:04:50.991 CC module/bdev/delay/vbdev_delay.o 00:04:50.991 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:50.991 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:50.991 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:50.991 CC module/bdev/split/vbdev_split_rpc.o 00:04:50.991 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:50.991 CC module/bdev/raid/bdev_raid_sb.o 00:04:50.991 CC module/bdev/nvme/vbdev_opal.o 00:04:50.991 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:50.991 CC module/bdev/raid/raid0.o 00:04:50.991 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:50.991 CC module/bdev/malloc/bdev_malloc.o 00:04:50.991 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:50.991 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:50.991 CC module/bdev/raid/raid1.o 00:04:50.991 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:50.991 CC module/bdev/passthru/vbdev_passthru.o 00:04:50.991 CC module/bdev/raid/concat.o 00:04:50.991 CC module/bdev/iscsi/bdev_iscsi.o 00:04:50.991 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:50.991 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:50.991 SYMLINK libspdk_vfu_device.so 00:04:51.250 LIB libspdk_sock_posix.a 00:04:51.250 SO libspdk_sock_posix.so.6.0 00:04:51.250 LIB libspdk_blobfs_bdev.a 00:04:51.250 LIB libspdk_bdev_null.a 00:04:51.250 SO libspdk_blobfs_bdev.so.6.0 00:04:51.250 SO libspdk_bdev_null.so.6.0 00:04:51.250 SYMLINK libspdk_sock_posix.so 00:04:51.250 LIB libspdk_bdev_split.a 00:04:51.250 LIB libspdk_bdev_error.a 00:04:51.250 LIB libspdk_bdev_malloc.a 00:04:51.250 LIB libspdk_bdev_gpt.a 00:04:51.250 SO libspdk_bdev_error.so.6.0 00:04:51.250 SO libspdk_bdev_split.so.6.0 00:04:51.250 SO libspdk_bdev_malloc.so.6.0 00:04:51.250 SYMLINK libspdk_blobfs_bdev.so 00:04:51.508 SYMLINK libspdk_bdev_null.so 00:04:51.508 SO libspdk_bdev_gpt.so.6.0 00:04:51.508 LIB libspdk_bdev_ftl.a 
00:04:51.508 LIB libspdk_bdev_passthru.a 00:04:51.508 SO libspdk_bdev_ftl.so.6.0 00:04:51.508 SYMLINK libspdk_bdev_split.so 00:04:51.508 SYMLINK libspdk_bdev_error.so 00:04:51.508 SYMLINK libspdk_bdev_malloc.so 00:04:51.508 LIB libspdk_bdev_delay.a 00:04:51.508 SO libspdk_bdev_passthru.so.6.0 00:04:51.508 SYMLINK libspdk_bdev_gpt.so 00:04:51.508 LIB libspdk_bdev_aio.a 00:04:51.508 SO libspdk_bdev_delay.so.6.0 00:04:51.508 LIB libspdk_bdev_zone_block.a 00:04:51.508 SYMLINK libspdk_bdev_ftl.so 00:04:51.508 SO libspdk_bdev_aio.so.6.0 00:04:51.508 SO libspdk_bdev_zone_block.so.6.0 00:04:51.508 LIB libspdk_bdev_iscsi.a 00:04:51.508 SYMLINK libspdk_bdev_passthru.so 00:04:51.508 SO libspdk_bdev_iscsi.so.6.0 00:04:51.508 SYMLINK libspdk_bdev_delay.so 00:04:51.508 SYMLINK libspdk_bdev_zone_block.so 00:04:51.508 SYMLINK libspdk_bdev_aio.so 00:04:51.508 SYMLINK libspdk_bdev_iscsi.so 00:04:51.508 LIB libspdk_bdev_lvol.a 00:04:51.767 SO libspdk_bdev_lvol.so.6.0 00:04:51.767 LIB libspdk_bdev_virtio.a 00:04:51.767 SO libspdk_bdev_virtio.so.6.0 00:04:51.767 SYMLINK libspdk_bdev_lvol.so 00:04:51.767 SYMLINK libspdk_bdev_virtio.so 00:04:52.025 LIB libspdk_bdev_raid.a 00:04:52.025 SO libspdk_bdev_raid.so.6.0 00:04:52.282 SYMLINK libspdk_bdev_raid.so 00:04:53.215 LIB libspdk_bdev_nvme.a 00:04:53.215 SO libspdk_bdev_nvme.so.7.0 00:04:53.474 SYMLINK libspdk_bdev_nvme.so 00:04:53.732 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:53.732 CC module/event/subsystems/vmd/vmd.o 00:04:53.732 CC module/event/subsystems/keyring/keyring.o 00:04:53.732 CC module/event/subsystems/scheduler/scheduler.o 00:04:53.732 CC module/event/subsystems/sock/sock.o 00:04:53.732 CC module/event/subsystems/iobuf/iobuf.o 00:04:53.732 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:53.732 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:53.732 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:53.992 LIB libspdk_event_keyring.a 00:04:53.992 LIB libspdk_event_vhost_blk.a 00:04:53.992 LIB libspdk_event_vfu_tgt.a 00:04:53.992 LIB libspdk_event_sock.a 00:04:53.992 LIB libspdk_event_vmd.a 00:04:53.992 LIB libspdk_event_scheduler.a 00:04:53.992 LIB libspdk_event_iobuf.a 00:04:53.992 SO libspdk_event_keyring.so.1.0 00:04:53.992 SO libspdk_event_vhost_blk.so.3.0 00:04:53.992 SO libspdk_event_sock.so.5.0 00:04:53.992 SO libspdk_event_vfu_tgt.so.3.0 00:04:53.992 SO libspdk_event_scheduler.so.4.0 00:04:53.992 SO libspdk_event_vmd.so.6.0 00:04:53.992 SO libspdk_event_iobuf.so.3.0 00:04:53.992 SYMLINK libspdk_event_keyring.so 00:04:53.992 SYMLINK libspdk_event_vhost_blk.so 00:04:53.992 SYMLINK libspdk_event_sock.so 00:04:53.992 SYMLINK libspdk_event_scheduler.so 00:04:53.992 SYMLINK libspdk_event_vfu_tgt.so 00:04:53.992 SYMLINK libspdk_event_vmd.so 00:04:53.992 SYMLINK libspdk_event_iobuf.so 00:04:54.250 CC module/event/subsystems/accel/accel.o 00:04:54.250 LIB libspdk_event_accel.a 00:04:54.250 SO libspdk_event_accel.so.6.0 00:04:54.510 SYMLINK libspdk_event_accel.so 00:04:54.510 CC module/event/subsystems/bdev/bdev.o 00:04:54.769 LIB libspdk_event_bdev.a 00:04:54.769 SO libspdk_event_bdev.so.6.0 00:04:54.769 SYMLINK libspdk_event_bdev.so 00:04:55.027 CC module/event/subsystems/ublk/ublk.o 00:04:55.027 CC module/event/subsystems/nbd/nbd.o 00:04:55.027 CC module/event/subsystems/scsi/scsi.o 00:04:55.027 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:55.027 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:55.027 LIB libspdk_event_nbd.a 00:04:55.027 LIB libspdk_event_ublk.a 00:04:55.027 LIB libspdk_event_scsi.a 00:04:55.027 SO 
libspdk_event_ublk.so.3.0 00:04:55.027 SO libspdk_event_nbd.so.6.0 00:04:55.287 SO libspdk_event_scsi.so.6.0 00:04:55.287 SYMLINK libspdk_event_ublk.so 00:04:55.287 SYMLINK libspdk_event_nbd.so 00:04:55.287 SYMLINK libspdk_event_scsi.so 00:04:55.287 LIB libspdk_event_nvmf.a 00:04:55.287 SO libspdk_event_nvmf.so.6.0 00:04:55.287 SYMLINK libspdk_event_nvmf.so 00:04:55.287 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:55.287 CC module/event/subsystems/iscsi/iscsi.o 00:04:55.545 LIB libspdk_event_vhost_scsi.a 00:04:55.545 SO libspdk_event_vhost_scsi.so.3.0 00:04:55.545 LIB libspdk_event_iscsi.a 00:04:55.545 SO libspdk_event_iscsi.so.6.0 00:04:55.545 SYMLINK libspdk_event_vhost_scsi.so 00:04:55.545 SYMLINK libspdk_event_iscsi.so 00:04:55.804 SO libspdk.so.6.0 00:04:55.804 SYMLINK libspdk.so 00:04:55.804 CC app/trace_record/trace_record.o 00:04:55.804 TEST_HEADER include/spdk/accel.h 00:04:55.804 CC app/spdk_nvme_identify/identify.o 00:04:55.804 TEST_HEADER include/spdk/accel_module.h 00:04:55.804 TEST_HEADER include/spdk/assert.h 00:04:55.804 TEST_HEADER include/spdk/base64.h 00:04:55.804 TEST_HEADER include/spdk/barrier.h 00:04:55.804 CXX app/trace/trace.o 00:04:55.804 TEST_HEADER include/spdk/bdev.h 00:04:55.804 CC app/spdk_nvme_discover/discovery_aer.o 00:04:55.804 CC test/rpc_client/rpc_client_test.o 00:04:55.804 TEST_HEADER include/spdk/bdev_module.h 00:04:55.804 CC app/spdk_lspci/spdk_lspci.o 00:04:55.804 TEST_HEADER include/spdk/bdev_zone.h 00:04:55.804 CC app/spdk_top/spdk_top.o 00:04:55.804 TEST_HEADER include/spdk/bit_array.h 00:04:55.804 TEST_HEADER include/spdk/bit_pool.h 00:04:55.804 CC app/spdk_nvme_perf/perf.o 00:04:55.804 TEST_HEADER include/spdk/blob_bdev.h 00:04:55.804 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:55.804 TEST_HEADER include/spdk/blobfs.h 00:04:55.804 TEST_HEADER include/spdk/blob.h 00:04:55.804 TEST_HEADER include/spdk/conf.h 00:04:55.804 TEST_HEADER include/spdk/config.h 00:04:55.804 TEST_HEADER include/spdk/cpuset.h 00:04:55.804 TEST_HEADER include/spdk/crc16.h 00:04:55.804 TEST_HEADER include/spdk/crc32.h 00:04:55.804 TEST_HEADER include/spdk/crc64.h 00:04:55.804 TEST_HEADER include/spdk/dif.h 00:04:55.804 TEST_HEADER include/spdk/dma.h 00:04:55.804 TEST_HEADER include/spdk/endian.h 00:04:55.804 TEST_HEADER include/spdk/env_dpdk.h 00:04:55.804 TEST_HEADER include/spdk/env.h 00:04:55.804 TEST_HEADER include/spdk/event.h 00:04:55.804 TEST_HEADER include/spdk/fd_group.h 00:04:55.804 TEST_HEADER include/spdk/fd.h 00:04:55.804 TEST_HEADER include/spdk/file.h 00:04:55.804 TEST_HEADER include/spdk/ftl.h 00:04:55.804 TEST_HEADER include/spdk/gpt_spec.h 00:04:55.804 TEST_HEADER include/spdk/hexlify.h 00:04:55.804 TEST_HEADER include/spdk/histogram_data.h 00:04:55.804 TEST_HEADER include/spdk/idxd.h 00:04:55.804 TEST_HEADER include/spdk/idxd_spec.h 00:04:55.804 TEST_HEADER include/spdk/init.h 00:04:55.804 TEST_HEADER include/spdk/ioat.h 00:04:55.804 TEST_HEADER include/spdk/ioat_spec.h 00:04:55.804 TEST_HEADER include/spdk/iscsi_spec.h 00:04:55.804 TEST_HEADER include/spdk/json.h 00:04:55.804 TEST_HEADER include/spdk/jsonrpc.h 00:04:55.804 TEST_HEADER include/spdk/keyring.h 00:04:55.804 TEST_HEADER include/spdk/keyring_module.h 00:04:55.804 TEST_HEADER include/spdk/likely.h 00:04:55.804 TEST_HEADER include/spdk/log.h 00:04:55.804 TEST_HEADER include/spdk/memory.h 00:04:55.804 TEST_HEADER include/spdk/lvol.h 00:04:56.068 TEST_HEADER include/spdk/mmio.h 00:04:56.068 TEST_HEADER include/spdk/nbd.h 00:04:56.068 TEST_HEADER include/spdk/notify.h 00:04:56.068 
TEST_HEADER include/spdk/nvme.h 00:04:56.068 TEST_HEADER include/spdk/nvme_intel.h 00:04:56.068 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:56.068 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:56.068 TEST_HEADER include/spdk/nvme_spec.h 00:04:56.068 TEST_HEADER include/spdk/nvme_zns.h 00:04:56.068 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:56.068 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:56.068 TEST_HEADER include/spdk/nvmf.h 00:04:56.068 TEST_HEADER include/spdk/nvmf_spec.h 00:04:56.068 TEST_HEADER include/spdk/nvmf_transport.h 00:04:56.068 TEST_HEADER include/spdk/opal.h 00:04:56.068 TEST_HEADER include/spdk/opal_spec.h 00:04:56.068 TEST_HEADER include/spdk/pci_ids.h 00:04:56.068 TEST_HEADER include/spdk/queue.h 00:04:56.068 TEST_HEADER include/spdk/pipe.h 00:04:56.068 TEST_HEADER include/spdk/reduce.h 00:04:56.068 TEST_HEADER include/spdk/rpc.h 00:04:56.068 TEST_HEADER include/spdk/scsi.h 00:04:56.068 TEST_HEADER include/spdk/scheduler.h 00:04:56.068 TEST_HEADER include/spdk/scsi_spec.h 00:04:56.068 TEST_HEADER include/spdk/sock.h 00:04:56.068 TEST_HEADER include/spdk/stdinc.h 00:04:56.068 TEST_HEADER include/spdk/string.h 00:04:56.068 TEST_HEADER include/spdk/thread.h 00:04:56.068 TEST_HEADER include/spdk/trace.h 00:04:56.068 TEST_HEADER include/spdk/trace_parser.h 00:04:56.068 TEST_HEADER include/spdk/tree.h 00:04:56.068 TEST_HEADER include/spdk/ublk.h 00:04:56.068 TEST_HEADER include/spdk/util.h 00:04:56.068 TEST_HEADER include/spdk/uuid.h 00:04:56.068 TEST_HEADER include/spdk/version.h 00:04:56.068 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:56.068 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:56.068 TEST_HEADER include/spdk/vhost.h 00:04:56.068 TEST_HEADER include/spdk/vmd.h 00:04:56.068 TEST_HEADER include/spdk/xor.h 00:04:56.068 TEST_HEADER include/spdk/zipf.h 00:04:56.068 CXX test/cpp_headers/accel.o 00:04:56.068 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:56.068 CXX test/cpp_headers/accel_module.o 00:04:56.068 CXX test/cpp_headers/assert.o 00:04:56.068 CXX test/cpp_headers/barrier.o 00:04:56.068 CC app/spdk_dd/spdk_dd.o 00:04:56.068 CXX test/cpp_headers/base64.o 00:04:56.068 CXX test/cpp_headers/bdev.o 00:04:56.068 CXX test/cpp_headers/bdev_module.o 00:04:56.068 CXX test/cpp_headers/bdev_zone.o 00:04:56.068 CXX test/cpp_headers/bit_array.o 00:04:56.068 CXX test/cpp_headers/bit_pool.o 00:04:56.068 CXX test/cpp_headers/blob_bdev.o 00:04:56.068 CXX test/cpp_headers/blobfs_bdev.o 00:04:56.068 CXX test/cpp_headers/blobfs.o 00:04:56.068 CXX test/cpp_headers/blob.o 00:04:56.068 CXX test/cpp_headers/conf.o 00:04:56.068 CXX test/cpp_headers/config.o 00:04:56.068 CC app/nvmf_tgt/nvmf_main.o 00:04:56.068 CXX test/cpp_headers/cpuset.o 00:04:56.068 CXX test/cpp_headers/crc16.o 00:04:56.068 CC app/iscsi_tgt/iscsi_tgt.o 00:04:56.068 CXX test/cpp_headers/crc32.o 00:04:56.068 CC app/spdk_tgt/spdk_tgt.o 00:04:56.068 CC test/thread/poller_perf/poller_perf.o 00:04:56.068 CC examples/util/zipf/zipf.o 00:04:56.068 CC test/env/vtophys/vtophys.o 00:04:56.068 CC test/env/memory/memory_ut.o 00:04:56.068 CC test/env/pci/pci_ut.o 00:04:56.068 CC examples/ioat/perf/perf.o 00:04:56.068 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:56.068 CC examples/ioat/verify/verify.o 00:04:56.068 CC test/app/histogram_perf/histogram_perf.o 00:04:56.068 CC test/app/jsoncat/jsoncat.o 00:04:56.068 CC test/app/stub/stub.o 00:04:56.068 CC app/fio/nvme/fio_plugin.o 00:04:56.068 CC test/dma/test_dma/test_dma.o 00:04:56.068 CC app/fio/bdev/fio_plugin.o 00:04:56.068 CC test/app/bdev_svc/bdev_svc.o 
00:04:56.338 LINK spdk_lspci 00:04:56.338 CC test/env/mem_callbacks/mem_callbacks.o 00:04:56.338 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:56.338 LINK rpc_client_test 00:04:56.338 LINK spdk_nvme_discover 00:04:56.338 LINK vtophys 00:04:56.338 LINK poller_perf 00:04:56.338 LINK interrupt_tgt 00:04:56.338 LINK zipf 00:04:56.338 LINK nvmf_tgt 00:04:56.338 LINK jsoncat 00:04:56.338 CXX test/cpp_headers/crc64.o 00:04:56.338 LINK histogram_perf 00:04:56.338 CXX test/cpp_headers/dif.o 00:04:56.338 CXX test/cpp_headers/dma.o 00:04:56.338 CXX test/cpp_headers/endian.o 00:04:56.338 LINK spdk_trace_record 00:04:56.338 CXX test/cpp_headers/env_dpdk.o 00:04:56.338 LINK env_dpdk_post_init 00:04:56.338 CXX test/cpp_headers/env.o 00:04:56.338 CXX test/cpp_headers/event.o 00:04:56.338 CXX test/cpp_headers/fd_group.o 00:04:56.338 CXX test/cpp_headers/fd.o 00:04:56.338 LINK stub 00:04:56.338 CXX test/cpp_headers/file.o 00:04:56.608 CXX test/cpp_headers/ftl.o 00:04:56.608 LINK iscsi_tgt 00:04:56.608 CXX test/cpp_headers/gpt_spec.o 00:04:56.608 CXX test/cpp_headers/hexlify.o 00:04:56.608 CXX test/cpp_headers/histogram_data.o 00:04:56.608 CXX test/cpp_headers/idxd.o 00:04:56.608 LINK ioat_perf 00:04:56.608 LINK spdk_tgt 00:04:56.608 CXX test/cpp_headers/idxd_spec.o 00:04:56.608 LINK verify 00:04:56.608 LINK bdev_svc 00:04:56.608 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:56.608 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:56.608 CXX test/cpp_headers/init.o 00:04:56.608 CXX test/cpp_headers/ioat.o 00:04:56.608 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:56.608 LINK spdk_dd 00:04:56.608 CXX test/cpp_headers/ioat_spec.o 00:04:56.608 CXX test/cpp_headers/iscsi_spec.o 00:04:56.871 CXX test/cpp_headers/json.o 00:04:56.871 CXX test/cpp_headers/jsonrpc.o 00:04:56.871 LINK spdk_trace 00:04:56.871 CXX test/cpp_headers/keyring.o 00:04:56.871 CXX test/cpp_headers/keyring_module.o 00:04:56.871 CXX test/cpp_headers/likely.o 00:04:56.871 LINK pci_ut 00:04:56.871 CXX test/cpp_headers/log.o 00:04:56.871 CXX test/cpp_headers/lvol.o 00:04:56.871 CXX test/cpp_headers/memory.o 00:04:56.871 CXX test/cpp_headers/mmio.o 00:04:56.871 CXX test/cpp_headers/nbd.o 00:04:56.871 CXX test/cpp_headers/notify.o 00:04:56.871 CXX test/cpp_headers/nvme.o 00:04:56.871 CXX test/cpp_headers/nvme_intel.o 00:04:56.871 CXX test/cpp_headers/nvme_ocssd.o 00:04:56.871 CXX test/cpp_headers/nvme_spec.o 00:04:56.871 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:56.871 CXX test/cpp_headers/nvme_zns.o 00:04:56.871 CXX test/cpp_headers/nvmf_cmd.o 00:04:56.871 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:56.871 LINK test_dma 00:04:56.871 CXX test/cpp_headers/nvmf.o 00:04:56.871 CXX test/cpp_headers/nvmf_spec.o 00:04:56.871 CXX test/cpp_headers/nvmf_transport.o 00:04:56.871 CXX test/cpp_headers/opal.o 00:04:56.871 CXX test/cpp_headers/opal_spec.o 00:04:57.130 CXX test/cpp_headers/pci_ids.o 00:04:57.130 CC test/event/event_perf/event_perf.o 00:04:57.130 LINK nvme_fuzz 00:04:57.130 CXX test/cpp_headers/pipe.o 00:04:57.130 CXX test/cpp_headers/queue.o 00:04:57.130 CC test/event/reactor_perf/reactor_perf.o 00:04:57.130 CC test/event/reactor/reactor.o 00:04:57.130 CC examples/sock/hello_world/hello_sock.o 00:04:57.130 CXX test/cpp_headers/reduce.o 00:04:57.130 CC examples/idxd/perf/perf.o 00:04:57.130 LINK spdk_nvme 00:04:57.130 LINK spdk_bdev 00:04:57.130 CC examples/vmd/lsvmd/lsvmd.o 00:04:57.130 CXX test/cpp_headers/rpc.o 00:04:57.130 CC test/event/app_repeat/app_repeat.o 00:04:57.130 CC examples/thread/thread/thread_ex.o 00:04:57.403 CC 
examples/vmd/led/led.o 00:04:57.403 CXX test/cpp_headers/scheduler.o 00:04:57.403 CC test/event/scheduler/scheduler.o 00:04:57.403 CXX test/cpp_headers/scsi.o 00:04:57.403 CXX test/cpp_headers/scsi_spec.o 00:04:57.403 CXX test/cpp_headers/sock.o 00:04:57.403 CXX test/cpp_headers/stdinc.o 00:04:57.403 CXX test/cpp_headers/string.o 00:04:57.403 CXX test/cpp_headers/thread.o 00:04:57.403 CXX test/cpp_headers/trace.o 00:04:57.403 CXX test/cpp_headers/trace_parser.o 00:04:57.403 CXX test/cpp_headers/tree.o 00:04:57.403 CXX test/cpp_headers/ublk.o 00:04:57.403 CXX test/cpp_headers/util.o 00:04:57.403 CXX test/cpp_headers/uuid.o 00:04:57.403 CXX test/cpp_headers/version.o 00:04:57.403 CXX test/cpp_headers/vfio_user_pci.o 00:04:57.403 CXX test/cpp_headers/vfio_user_spec.o 00:04:57.403 CXX test/cpp_headers/vhost.o 00:04:57.403 CXX test/cpp_headers/vmd.o 00:04:57.403 CXX test/cpp_headers/xor.o 00:04:57.403 LINK event_perf 00:04:57.403 CXX test/cpp_headers/zipf.o 00:04:57.403 LINK mem_callbacks 00:04:57.403 LINK spdk_nvme_perf 00:04:57.403 LINK reactor_perf 00:04:57.403 CC app/vhost/vhost.o 00:04:57.403 LINK reactor 00:04:57.403 LINK vhost_fuzz 00:04:57.403 LINK lsvmd 00:04:57.661 LINK app_repeat 00:04:57.661 LINK spdk_nvme_identify 00:04:57.661 LINK led 00:04:57.661 LINK spdk_top 00:04:57.661 LINK hello_sock 00:04:57.661 CC test/nvme/reset/reset.o 00:04:57.661 CC test/nvme/overhead/overhead.o 00:04:57.661 CC test/nvme/sgl/sgl.o 00:04:57.661 CC test/nvme/reserve/reserve.o 00:04:57.661 CC test/nvme/aer/aer.o 00:04:57.661 CC test/nvme/e2edp/nvme_dp.o 00:04:57.661 CC test/nvme/startup/startup.o 00:04:57.661 CC test/nvme/err_injection/err_injection.o 00:04:57.661 CC test/nvme/simple_copy/simple_copy.o 00:04:57.661 LINK scheduler 00:04:57.661 LINK thread 00:04:57.661 CC test/accel/dif/dif.o 00:04:57.661 CC test/blobfs/mkfs/mkfs.o 00:04:57.919 CC test/nvme/connect_stress/connect_stress.o 00:04:57.920 CC test/nvme/boot_partition/boot_partition.o 00:04:57.920 CC test/nvme/fused_ordering/fused_ordering.o 00:04:57.920 CC test/nvme/compliance/nvme_compliance.o 00:04:57.920 CC test/lvol/esnap/esnap.o 00:04:57.920 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:57.920 CC test/nvme/fdp/fdp.o 00:04:57.920 CC test/nvme/cuse/cuse.o 00:04:57.920 LINK idxd_perf 00:04:57.920 LINK vhost 00:04:57.920 LINK startup 00:04:57.920 LINK reserve 00:04:57.920 LINK boot_partition 00:04:57.920 LINK simple_copy 00:04:58.178 LINK mkfs 00:04:58.178 LINK fused_ordering 00:04:58.178 LINK err_injection 00:04:58.178 LINK memory_ut 00:04:58.178 LINK doorbell_aers 00:04:58.178 CC examples/nvme/hotplug/hotplug.o 00:04:58.178 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:58.178 CC examples/nvme/abort/abort.o 00:04:58.178 CC examples/nvme/reconnect/reconnect.o 00:04:58.178 LINK nvme_dp 00:04:58.178 CC examples/nvme/hello_world/hello_world.o 00:04:58.178 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:58.178 CC examples/nvme/arbitration/arbitration.o 00:04:58.178 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:58.178 LINK connect_stress 00:04:58.178 LINK reset 00:04:58.178 LINK sgl 00:04:58.178 LINK aer 00:04:58.178 LINK overhead 00:04:58.178 LINK nvme_compliance 00:04:58.178 LINK fdp 00:04:58.436 LINK pmr_persistence 00:04:58.436 CC examples/accel/perf/accel_perf.o 00:04:58.436 CC examples/blob/cli/blobcli.o 00:04:58.436 LINK hotplug 00:04:58.436 LINK dif 00:04:58.436 CC examples/blob/hello_world/hello_blob.o 00:04:58.436 LINK cmb_copy 00:04:58.436 LINK hello_world 00:04:58.694 LINK arbitration 00:04:58.694 LINK reconnect 00:04:58.694 
LINK abort 00:04:58.694 LINK hello_blob 00:04:58.694 LINK nvme_manage 00:04:58.951 CC test/bdev/bdevio/bdevio.o 00:04:58.951 LINK accel_perf 00:04:58.951 LINK blobcli 00:04:58.951 LINK iscsi_fuzz 00:04:59.209 CC examples/bdev/hello_world/hello_bdev.o 00:04:59.209 CC examples/bdev/bdevperf/bdevperf.o 00:04:59.209 LINK bdevio 00:04:59.467 LINK cuse 00:04:59.467 LINK hello_bdev 00:05:00.035 LINK bdevperf 00:05:00.293 CC examples/nvmf/nvmf/nvmf.o 00:05:00.550 LINK nvmf 00:05:03.086 LINK esnap 00:05:03.086 00:05:03.086 real 0m48.867s 00:05:03.086 user 10m10.724s 00:05:03.086 sys 2m29.134s 00:05:03.086 09:13:14 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:05:03.086 09:13:14 make -- common/autotest_common.sh@10 -- $ set +x 00:05:03.086 ************************************ 00:05:03.086 END TEST make 00:05:03.086 ************************************ 00:05:03.086 09:13:14 -- common/autotest_common.sh@1142 -- $ return 0 00:05:03.086 09:13:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:03.086 09:13:14 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:03.086 09:13:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:03.086 09:13:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.086 09:13:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:03.086 09:13:14 -- pm/common@44 -- $ pid=628567 00:05:03.086 09:13:14 -- pm/common@50 -- $ kill -TERM 628567 00:05:03.086 09:13:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.086 09:13:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:03.086 09:13:14 -- pm/common@44 -- $ pid=628569 00:05:03.086 09:13:14 -- pm/common@50 -- $ kill -TERM 628569 00:05:03.086 09:13:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.086 09:13:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:03.086 09:13:14 -- pm/common@44 -- $ pid=628571 00:05:03.086 09:13:14 -- pm/common@50 -- $ kill -TERM 628571 00:05:03.086 09:13:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.086 09:13:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:03.086 09:13:14 -- pm/common@44 -- $ pid=628599 00:05:03.086 09:13:14 -- pm/common@50 -- $ sudo -E kill -TERM 628599 00:05:03.346 09:13:14 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:03.346 09:13:14 -- nvmf/common.sh@7 -- # uname -s 00:05:03.346 09:13:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.346 09:13:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.346 09:13:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.346 09:13:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.346 09:13:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.346 09:13:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.346 09:13:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.346 09:13:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.346 09:13:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.346 09:13:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.346 09:13:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:03.346 
09:13:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:03.346 09:13:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.346 09:13:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.346 09:13:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:03.346 09:13:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.346 09:13:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:03.346 09:13:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.346 09:13:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.346 09:13:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.346 09:13:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.346 09:13:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.346 09:13:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.346 09:13:14 -- paths/export.sh@5 -- # export PATH 00:05:03.346 09:13:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.346 09:13:14 -- nvmf/common.sh@47 -- # : 0 00:05:03.346 09:13:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:03.346 09:13:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:03.346 09:13:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.346 09:13:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.346 09:13:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.346 09:13:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:03.346 09:13:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:03.346 09:13:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:03.346 09:13:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:03.346 09:13:14 -- spdk/autotest.sh@32 -- # uname -s 00:05:03.346 09:13:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:03.346 09:13:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:03.346 09:13:14 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:03.346 09:13:14 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:03.346 09:13:14 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:03.346 09:13:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:03.346 09:13:14 
-- spdk/autotest.sh@46 -- # type -P udevadm 00:05:03.346 09:13:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:03.346 09:13:14 -- spdk/autotest.sh@48 -- # udevadm_pid=684594 00:05:03.346 09:13:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:03.346 09:13:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:03.346 09:13:14 -- pm/common@17 -- # local monitor 00:05:03.346 09:13:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.346 09:13:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.346 09:13:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.346 09:13:14 -- pm/common@21 -- # date +%s 00:05:03.346 09:13:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.346 09:13:14 -- pm/common@21 -- # date +%s 00:05:03.346 09:13:14 -- pm/common@25 -- # sleep 1 00:05:03.346 09:13:14 -- pm/common@21 -- # date +%s 00:05:03.346 09:13:14 -- pm/common@21 -- # date +%s 00:05:03.346 09:13:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721027594 00:05:03.346 09:13:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721027594 00:05:03.346 09:13:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721027594 00:05:03.346 09:13:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721027594 00:05:03.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721027594_collect-vmstat.pm.log 00:05:03.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721027594_collect-cpu-load.pm.log 00:05:03.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721027594_collect-cpu-temp.pm.log 00:05:03.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721027594_collect-bmc-pm.bmc.pm.log 00:05:04.281 09:13:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:04.281 09:13:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:04.282 09:13:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.282 09:13:15 -- common/autotest_common.sh@10 -- # set +x 00:05:04.282 09:13:15 -- spdk/autotest.sh@59 -- # create_test_list 00:05:04.282 09:13:15 -- common/autotest_common.sh@746 -- # xtrace_disable 00:05:04.282 09:13:15 -- common/autotest_common.sh@10 -- # set +x 00:05:04.282 09:13:15 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:04.282 09:13:15 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:04.282 09:13:15 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:04.282 09:13:15 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:04.282 09:13:15 -- spdk/autotest.sh@63 
-- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:04.282 09:13:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:04.282 09:13:15 -- common/autotest_common.sh@1455 -- # uname 00:05:04.282 09:13:15 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:04.282 09:13:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:04.282 09:13:15 -- common/autotest_common.sh@1475 -- # uname 00:05:04.282 09:13:15 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:04.282 09:13:15 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:04.282 09:13:15 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:04.282 09:13:15 -- spdk/autotest.sh@72 -- # hash lcov 00:05:04.282 09:13:15 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:04.282 09:13:15 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:04.282 --rc lcov_branch_coverage=1 00:05:04.282 --rc lcov_function_coverage=1 00:05:04.282 --rc genhtml_branch_coverage=1 00:05:04.282 --rc genhtml_function_coverage=1 00:05:04.282 --rc genhtml_legend=1 00:05:04.282 --rc geninfo_all_blocks=1 00:05:04.282 ' 00:05:04.282 09:13:15 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:04.282 --rc lcov_branch_coverage=1 00:05:04.282 --rc lcov_function_coverage=1 00:05:04.282 --rc genhtml_branch_coverage=1 00:05:04.282 --rc genhtml_function_coverage=1 00:05:04.282 --rc genhtml_legend=1 00:05:04.282 --rc geninfo_all_blocks=1 00:05:04.282 ' 00:05:04.282 09:13:15 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:04.282 --rc lcov_branch_coverage=1 00:05:04.282 --rc lcov_function_coverage=1 00:05:04.282 --rc genhtml_branch_coverage=1 00:05:04.282 --rc genhtml_function_coverage=1 00:05:04.282 --rc genhtml_legend=1 00:05:04.282 --rc geninfo_all_blocks=1 00:05:04.282 --no-external' 00:05:04.282 09:13:15 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:04.282 --rc lcov_branch_coverage=1 00:05:04.282 --rc lcov_function_coverage=1 00:05:04.282 --rc genhtml_branch_coverage=1 00:05:04.282 --rc genhtml_function_coverage=1 00:05:04.282 --rc genhtml_legend=1 00:05:04.282 --rc geninfo_all_blocks=1 00:05:04.282 --no-external' 00:05:04.282 09:13:15 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:04.540 lcov: LCOV version 1.14 00:05:04.540 09:13:15 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:19.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:19.404 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:34.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:34.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:05:34.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:34.268 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:05:34.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:34.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:05:34.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:34.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:05:34.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:34.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:05:34.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:34.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:05:34.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:34.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:05:34.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:34.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:05:34.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 
00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:34.269 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:05:34.269 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:34.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:05:34.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:34.270 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:34.270 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:05:34.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:34.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:05:38.448 09:13:49 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:38.448 09:13:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.448 09:13:49 -- common/autotest_common.sh@10 -- # set +x 00:05:38.448 09:13:49 -- spdk/autotest.sh@91 -- # rm -f 00:05:38.448 09:13:49 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:39.398 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:39.398 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:39.398 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:39.398 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:39.655 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:39.655 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:39.655 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:39.655 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:39.655 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:05:39.655 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:39.655 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:39.655 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:39.655 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:39.655 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:39.655 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:39.655 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:39.655 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:39.914 09:13:50 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:39.914 09:13:50 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:39.914 09:13:50 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:39.914 09:13:50 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:39.914 09:13:50 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:39.914 09:13:50 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:39.914 09:13:50 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:39.914 09:13:50 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:39.914 09:13:50 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:39.914 09:13:50 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:39.914 09:13:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:39.914 09:13:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 
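Editor's note: the entries above step through autotest_common.sh's get_zoned_devs/is_block_zoned helpers. A minimal sketch of that scan, reconstructed from the trace alone (function and variable names match the trace; everything else is illustrative, not the verbatim SPDK source):

    # Collect zoned block devices: a device counts as zoned when its
    # queue/zoned sysfs attribute exists and reports something other
    # than "none".
    get_zoned_devs() {
        local -gA zoned_devs=()
        local nvme
        for nvme in /sys/block/nvme*; do
            if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
                zoned_devs[${nvme##*/}]=1 # value assumed; the trace also declares a bdf variable
            fi
        done
    }

In this run nvme0n1 reports "none", so the map stays empty and the (( 0 > 0 )) guard traced above skips the zoned-device handling. The entries that follow trace scripts/common.sh's block_in_use check on /dev/nvme0n1; a hedged sketch of that check (the exit-status conventions are assumptions inferred from the trace):

    # A disk is "in use" if SPDK's GPT parser accepts it or blkid finds
    # any partition-table signature; otherwise autotest may reuse it.
    block_in_use() {
        local block=$1 pt
        if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py "$block"; then
            return 0 # valid GPT found; prints "No valid GPT data, bailing" otherwise
        fi
        pt=$(blkid -s PTTYPE -o value "$block") || pt=
        [[ -n $pt ]] && return 0
        return 1 # free: the caller then wipes the first MiB with dd before reuse
    }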
00:05:39.914 09:13:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:05:39.914 09:13:50 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:05:39.914 09:13:50 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:05:39.914 No valid GPT data, bailing
00:05:39.914 09:13:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:39.914 09:13:50 -- scripts/common.sh@391 -- # pt=
00:05:39.914 09:13:50 -- scripts/common.sh@392 -- # return 1
00:05:39.914 09:13:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:39.914 1+0 records in
00:05:39.914 1+0 records out
00:05:39.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00238249 s, 440 MB/s
00:05:39.914 09:13:50 -- spdk/autotest.sh@118 -- # sync
00:05:39.914 09:13:50 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:39.914 09:13:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:39.914 09:13:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:41.813 09:13:52 -- spdk/autotest.sh@124 -- # uname -s
00:05:41.813 09:13:52 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:05:41.813 09:13:52 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:05:41.813 09:13:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:41.813 09:13:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:41.813 09:13:52 -- common/autotest_common.sh@10 -- # set +x
00:05:41.813 ************************************
00:05:41.813 START TEST setup.sh
00:05:41.813 ************************************
00:05:41.813 09:13:52 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:05:41.813 * Looking for test storage...
00:05:41.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:41.813 09:13:52 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:05:41.813 09:13:52 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:05:41.813 09:13:52 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:05:41.813 09:13:52 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:41.813 09:13:52 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:41.813 09:13:52 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:41.813 ************************************
00:05:41.813 START TEST acl
00:05:41.813 ************************************
00:05:41.813 09:13:52 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:05:41.813 * Looking for test storage...
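Editor's note: every START TEST/END TEST banner pair in this log comes from autotest_common.sh's run_test wrapper, which also produces the real/user/sys lines via time. A hedged sketch of its shape, inferred from the banners, the '[' 2 -le 1 ']' argument check, and the timing output traced here (not the verbatim implementation):

    # run_test <name> <command...>: fence the command with banners,
    # time it, and propagate its exit status.
    run_test() {
        [ $# -le 1 ] && return 1 # the '[' 2 -le 1 ']' check: a name plus a command are required
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@" # emits the real/user/sys trio printed before each END TEST banner
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }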
00:05:41.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:41.813 09:13:52 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:41.813 09:13:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:41.813 09:13:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:41.813 09:13:52 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:41.813 09:13:52 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:41.813 09:13:52 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:41.813 09:13:52 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:41.813 09:13:52 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:41.813 09:13:52 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:41.813 09:13:52 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:41.813 09:13:52 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:41.813 09:13:52 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:41.813 09:13:52 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:41.813 09:13:52 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:41.813 09:13:52 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:41.813 09:13:52 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:43.713 09:13:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:43.713 09:13:54 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:43.713 09:13:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:43.713 09:13:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:43.713 09:13:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.713 09:13:54 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:44.650 Hugepages 00:05:44.650 node hugesize free / total 00:05:44.650 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:44.650 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:44.650 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.650 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:44.650 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:44.650 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.650 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:44.650 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:44.650 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.650 00:05:44.650 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:44.650 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:44.650 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:44.650 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:44.651 09:13:55 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:44.651 09:13:55 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.651 09:13:55 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.651 09:13:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:44.651 ************************************ 00:05:44.651 START TEST denied 00:05:44.651 ************************************ 00:05:44.651 09:13:55 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:44.651 09:13:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:05:44.651 09:13:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:44.651 09:13:55 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:05:44.651 09:13:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.651 09:13:55 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:46.559 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:05:46.559 09:13:57 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:05:46.559 09:13:57 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:46.559 09:13:57 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:46.559 09:13:57 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:05:46.559 09:13:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:05:46.559 09:13:57 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:46.559 09:13:57 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:46.559 09:13:57 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:46.559 09:13:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:46.559 09:13:57 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:49.099 00:05:49.099 real 0m4.062s 00:05:49.099 user 0m1.148s 00:05:49.099 sys 0m1.956s 00:05:49.100 09:13:59 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.100 09:13:59 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:49.100 ************************************ 00:05:49.100 END TEST denied 00:05:49.100 ************************************ 00:05:49.100 09:13:59 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:49.100 09:13:59 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:49.100 09:13:59 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.100 09:13:59 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.100 09:13:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:49.100 ************************************ 00:05:49.100 START TEST allowed 00:05:49.100 ************************************ 00:05:49.100 09:13:59 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:49.100 09:13:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:05:49.100 09:13:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:49.100 09:13:59 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:05:49.100 09:13:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.100 09:13:59 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:51.633 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:51.633 09:14:02 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:51.633 09:14:02 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:51.633 09:14:02 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:51.633 09:14:02 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:51.633 09:14:02 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:53.018 00:05:53.018 real 0m4.038s 00:05:53.018 user 0m1.048s 00:05:53.018 sys 0m1.887s 00:05:53.018 09:14:03 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.018 09:14:03 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:53.018 ************************************ 00:05:53.018 END TEST allowed 00:05:53.018 ************************************ 00:05:53.018 09:14:03 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:53.018 00:05:53.018 real 0m11.027s 00:05:53.018 user 0m3.355s 00:05:53.018 sys 0m5.688s 00:05:53.018 09:14:03 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.018 09:14:03 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:53.018 ************************************ 00:05:53.018 END TEST acl 00:05:53.018 ************************************ 00:05:53.018 09:14:03 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:53.018 09:14:03 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:53.018 09:14:03 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.018 09:14:03 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.018 09:14:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:53.018 ************************************ 00:05:53.018 START TEST hugepages 00:05:53.018 ************************************ 00:05:53.019 09:14:03 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:53.019 * Looking for test storage... 00:05:53.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:53.019 09:14:04 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44537056 kB' 'MemAvailable: 47951204 kB' 'Buffers: 12432 kB' 'Cached: 9111700 kB' 'SwapCached: 0 kB' 'Active: 6640468 kB' 'Inactive: 3452172 kB' 'Active(anon): 6263760 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 972360 kB' 'Mapped: 169584 kB' 'Shmem: 5295252 kB' 'KReclaimable: 157780 kB' 'Slab: 447776 kB' 'SReclaimable: 157780 kB' 'SUnreclaim: 289996 kB' 'KernelStack: 12752 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 7773376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193496 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
00:05:53.019 [... get_meminfo: per-key scan of /proc/meminfo for Hugepagesize; identical skip iterations for MemTotal through HugePages_Surp elided ...]
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
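What the scan above resolves: setup/common.sh's get_meminfo walks /proc/meminfo key by key until the requested field (here Hugepagesize) matches, then prints just the value. A minimal standalone sketch of that parsing pattern, simplified from the trace (the real helper also handles the per-node meminfo files under /sys/devices/system/node; this version assumes plain /proc/meminfo):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern seen in the xtrace: split each
    # "Key: value kB" line on ':' plus whitespace, skip non-matching
    # keys, and print only the numeric value of the requested one.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done </proc/meminfo
        return 1
    }

    get_meminfo Hugepagesize   # prints 2048 on this test node, per the trace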
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:53.020 09:14:04 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:53.020 09:14:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:53.020 09:14:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:53.020 09:14:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:53.020 ************************************
00:05:53.020 START TEST default_setup
00:05:53.020 ************************************
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:05:53.020 09:14:04 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
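The nr_hugepages=1024 computed above is plain division: get_test_nr_hugepages was asked for 2097152 (apparently kB, i.e. 2 GiB, matching the division in the trace) on node 0, and with the default 2048 kB hugepage size that is 1024 pages, all assigned to node 0. The arithmetic restated as a sketch (variable names follow the trace; this is an illustration, not the full helper):

    # 2 GiB requested / 2 MiB per hugepage = 1024 hugepages, all on node 0
    size=2097152            # requested size in kB, from the trace
    default_hugepages=2048  # hugepage size in kB (Hugepagesize above)
    nr_hugepages=$((size / default_hugepages))   # -> 1024
    nodes_test[0]=$nr_hugepages                  # node 0 takes every page
    echo "nr_hugepages=$nr_hugepages (node0=${nodes_test[0]})"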
00:05:54.394 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:54.394 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:54.394 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:54.394 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:54.394 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:54.394 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:54.394 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:54.394 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:54.394 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:54.394 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:54.394 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:54.394 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:54.394 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:54.394 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:54.394 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:54.394 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:55.333 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:55.595 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:55.596 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46630476 kB' 'MemAvailable: 50044452 kB' 'Buffers: 12432 kB' 'Cached: 9111792 kB' 'SwapCached: 0 kB' 'Active: 6659188 kB' 'Inactive: 3452172 kB' 'Active(anon): 6282480 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 990440 kB' 'Mapped: 169860 kB' 'Shmem: 5295344 kB' 'KReclaimable: 157436 kB' 'Slab: 446404 kB' 'SReclaimable: 157436 kB' 'SUnreclaim: 288968 kB' 'KernelStack: 12832 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7790484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193576 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
00:05:55.596 [... get_meminfo: per-key scan for AnonHugePages; identical skip iterations for MemTotal through HardwareCorrupted elided ...]
00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
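With anon=0 in hand, verify_nr_hugepages goes on to collect HugePages_Surp and HugePages_Rsvd the same way. The pass condition it is building toward: no anonymous (transparent) hugepages, no surplus or reserved pages, and the pool fully free at the requested size. A sketch of that end-state check, reusing the get_meminfo sketch shown earlier (the exact assertions in SPDK's hugepages.sh may differ; this captures the shape of the check):

    # All three counters must be zero and the pool fully free after setup.
    anon=$(get_meminfo AnonHugePages)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    total=$(get_meminfo HugePages_Total)
    free=$(get_meminfo HugePages_Free)
    if (( anon == 0 && surp == 0 && resv == 0 && free == total )); then
        echo "hugepage state clean: $free/$total pages free"
    else
        echo "unexpected state: anon=$anon surp=$surp resv=$resv free=$free/$total" >&2
    fi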
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46630668 kB' 'MemAvailable: 50044644 kB' 'Buffers: 12432 kB' 'Cached: 9111792 kB' 'SwapCached: 0 kB' 'Active: 6659192 kB' 'Inactive: 3452172 kB' 'Active(anon): 6282484 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 990460 kB' 'Mapped: 169820 kB' 'Shmem: 5295344 kB' 'KReclaimable: 157436 kB' 'Slab: 446416 kB' 'SReclaimable: 157436 kB' 'SUnreclaim: 288980 kB' 'KernelStack: 12752 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7790504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193544 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.597 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:55.598 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same read/compare/continue cycle repeats for NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd, none of which match HugePages_Surp ...]
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
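What this trace is stepping through is the get_meminfo helper in setup/common.sh: it reads /proc/meminfo (or a per-node sysfs meminfo file when a node id is passed), splits each line on ': ', and prints the value of the first field whose name matches the requested key. A minimal reconstruction of that pattern as the xtrace suggests it — the control flow around common.sh@23-@28 is inferred, so the real script may differ in detail:

    shopt -s extglob
    get_meminfo() {
        local get=$1
        local node=${2:-}   # empty => system-wide /proc/meminfo
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node statistics live under sysfs when a node id is given.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <id> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp     # prints 0 on the machine in this log
    get_meminfo HugePages_Surp 0   # same key, read from node0's meminfo

The escaped patterns in the trace ([[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]) are just how xtrace renders the quoted right-hand side of that exact-match test.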
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:55.599 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46630164 kB' 'MemAvailable: 50044140 kB' 'Buffers: 12432 kB' 'Cached: 9111808 kB' 'SwapCached: 0 kB' 'Active: 6658644 kB' 'Inactive: 3452172 kB' 'Active(anon): 6281936 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 989800 kB' 'Mapped: 169716 kB' 'Shmem: 5295360 kB' 'KReclaimable: 157436 kB' 'Slab: 446512 kB' 'SReclaimable: 157436 kB' 'SUnreclaim: 289076 kB' 'KernelStack: 12768 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7790524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193528 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
[... the scan walks the snapshot above field by field, read/compare/continue for every key from MemTotal through HugePages_Free, until HugePages_Rsvd matches ...]
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:55.601 nr_hugepages=1024
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:55.601 resv_hugepages=0
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:55.601 surplus_hugepages=0
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:55.601 anon_hugepages=0
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
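The checks the script asserts here are just hugepage-pool bookkeeping: the count the test requested must equal what the kernel reports once surplus and reserved pages are folded in. A hedged paraphrase of those steps, reusing the get_meminfo sketch above (variable names nr_hugepages, surp and resv come from the trace; the wiring between them is assumed):

    nr_hugepages=1024                      # requested default-size pages
    surp=$(get_meminfo HugePages_Surp)     # -> 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # -> 0 in this run

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"

    # The pool is consistent when the kernel's total equals the request
    # plus any surplus and reserved pages.
    total=$(get_meminfo HugePages_Total)   # -> 1024 in this run
    (( total == nr_hugepages + surp + resv ))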
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:55.601 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46630164 kB' 'MemAvailable: 50044140 kB' 'Buffers: 12432 kB' 'Cached: 9111816 kB' 'SwapCached: 0 kB' 'Active: 6658120 kB' 'Inactive: 3452172 kB' 'Active(anon): 6281412 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 989300 kB' 'Mapped: 169716 kB' 'Shmem: 5295368 kB' 'KReclaimable: 157436 kB' 'Slab: 446512 kB' 'SReclaimable: 157436 kB' 'SUnreclaim: 289076 kB' 'KernelStack: 12736 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7790548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193528 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
[... the scan again walks every field of the snapshot above, read/compare/continue from MemTotal through Unaccepted, until HugePages_Total matches ...]
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
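Three full key-by-key scans of essentially the same snapshot have now gone by; get_meminfo re-reads and re-parses the whole file on every call, which is what makes this xtrace so long. The repetition is harmless in a test, but if it mattered the lookups could be memoized. A sketch under that assumption — the cache and function name below are invented for illustration and are not part of setup/common.sh:

    declare -A meminfo_cache

    get_meminfo_cached() {
        local get=$1 var val _
        # Parse /proc/meminfo once into an associative array.
        if (( ${#meminfo_cache[@]} == 0 )); then
            while IFS=': ' read -r var val _; do
                meminfo_cache[$var]=$val
            done < /proc/meminfo
        fi
        echo "${meminfo_cache[$get]}"
    }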
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:55.603 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21958136 kB' 'MemUsed: 10871748 kB' 'SwapCached: 0 kB' 'Active: 4638960 kB' 'Inactive: 3357784 kB' 'Active(anon): 4505540 kB' 'Inactive(anon): 0 kB' 'Active(file): 133420 kB' 'Inactive(file): 3357784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7325960 kB' 'Mapped: 77332 kB' 'AnonPages: 673960 kB' 'Shmem: 3834756 kB' 'KernelStack: 5784 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 59292 kB' 'Slab: 177152 kB' 'SReclaimable: 59292 kB' 'SUnreclaim: 117860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... the per-node scan proceeds key by key through the node0 snapshot above toward HugePages_Surp ...]
-- setup/common.sh@31 -- # IFS=': ' 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:55.604 node0=1024 expecting 1024 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:55.604 00:05:55.604 real 0m2.559s 00:05:55.604 user 0m0.689s 00:05:55.604 sys 0m0.909s 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.604 09:14:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:55.604 ************************************ 00:05:55.604 END TEST default_setup 00:05:55.604 ************************************ 00:05:55.604 09:14:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:55.604 09:14:06 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:55.604 09:14:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.604 09:14:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.604 09:14:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:55.604 ************************************ 00:05:55.604 START TEST per_node_1G_alloc 00:05:55.604 ************************************ 00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:55.604 09:14:06 
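For anyone skimming: the wall of compare/continue records above (and below) is one helper, get_meminfo in scripts/setup/common.sh, probing a single field of a meminfo file per call. A minimal bash sketch of that loop, reconstructed from the @16-@33 trace records -- an approximation for readability, not the verbatim SPDK helper:

  shopt -s extglob   # needed for the +([0-9]) pattern below
  get_meminfo() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo mem
      # With a node argument, prefer the per-node sysfs file; when $node is
      # empty the probed path ".../node/node/meminfo" fails the -e test and
      # the helper falls back to /proc/meminfo.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")       # strip sysfs's "Node N " prefix (@29)
      while IFS=': ' read -r var val _; do   # @31
          [[ $var == "$get" ]] || continue   # the long compare/continue run (@32)
          echo "$val"                        # @33
          return 0
      done < <(printf '%s\n' "${mem[@]}")    # @16
  }

Called as, e.g., get_meminfo HugePages_Surp 0, which is the kind of node0 query that produced the scan ending just above.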
00:05:55.604 09:14:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:55.604 09:14:06 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:55.604 09:14:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:55.604 09:14:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:55.604 09:14:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:55.604 ************************************
00:05:55.604 START TEST per_node_1G_alloc
00:05:55.604 ************************************
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:55.604 09:14:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:56.979 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:56.979 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:56.979 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:56.979 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:56.979 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:56.979 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:56.979 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:56.979 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:56.979 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:56.979 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:56.979 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:56.979 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:56.979 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:56.979 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:56.979 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:56.979 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:56.979 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
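Distilled from the get_test_nr_hugepages trace above: the 1G request (size=1048576, in kB) over node list 0,1 becomes 512 default-size pages per node -- 1048576 / 2048 = 512, using the 2048 kB Hugepagesize reported in the snapshots below -- and setup.sh then runs with NRHUGE=512 HUGENODE=0,1, i.e. 1024 pages in total. A sketch of that split under those assumptions (not the verbatim hugepages.sh):

  default_hugepages=2048                       # kB, matches the Hugepagesize line below
  size=1048576                                 # kB, the 1G request from the trace
  user_nodes=(0 1)
  nr_hugepages=$((size / default_hugepages))   # 512 pages of 2 MiB
  nodes_test=()
  for node in "${user_nodes[@]}"; do
      nodes_test[node]=$nr_hugepages           # each requested node gets the full count
  done                                         # -> node0=512, node1=512, 1024 total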
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46633116 kB' 'MemAvailable: 50047092 kB' 'Buffers: 12432 kB' 'Cached: 9111912 kB' 'SwapCached: 0 kB' 'Active: 6658508 kB' 'Inactive: 3452172 kB' 'Active(anon): 6281800 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 989536 kB' 'Mapped: 169776 kB' 'Shmem: 5295464 kB' 'KReclaimable: 157436 kB' 'Slab: 446100 kB' 'SReclaimable: 157436 kB' 'SUnreclaim: 288664 kB' 'KernelStack: 12736 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7791028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193656 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
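Two things worth noting in the records just above. The @23/@25 probes are the per-node fallback from the earlier sketch: with $node empty the sysfs path does not exist, so this read is against /proc/meminfo as a whole. And the @96 guard has the exact shape of /sys/kernel/mm/transparent_hugepage/enabled ('always [madvise] never' here); read that way -- an assumption about hugepages.sh, not a quote from it -- it only bothers querying AnonHugePages when THP is not pinned to [never]:

  # sketch of the @96 gate; the sysfs source of the string is an assumption
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # AnonHugePages only moves when THP is active
  fi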
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:56.979 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the setup/common.sh@31-@32 read/compare/continue records repeat for each meminfo field until AnonHugePages matches ...]
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
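The anon=0 assignment implies the caller captures get_meminfo via command substitution, one fresh pass over /proc/meminfo per field; the next two queries in the trace follow the same pattern. A sketch mirroring the @97/@99/@100 records (names from the trace, wiring assumed):

  anon=$(get_meminfo AnonHugePages)    # @97 -> 0 in this run
  surp=$(get_meminfo HugePages_Surp)   # @99 -> queried next
  resv=$(get_meminfo HugePages_Rsvd)   # @100 -> queried after that

To reproduce any one of these numbers by hand, awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo is an equivalent one-liner (illustrative; the test itself uses the loop sketched earlier).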
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:56.980 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:56.981 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46638016 kB' 'MemAvailable: 50051992 kB' 'Buffers: 12432 kB' 'Cached: 9111912 kB' 'SwapCached: 0 kB' 'Active: 6659208 kB' 'Inactive: 3452172 kB' 'Active(anon): 6282500 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 990252 kB' 'Mapped: 169732 kB' 'Shmem: 5295464 kB' 'KReclaimable: 157436 kB' 'Slab: 446076 kB' 'SReclaimable: 157436 kB' 'SUnreclaim: 288640 kB' 'KernelStack: 12784 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7791048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193608 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
00:05:56.981 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:56.981 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the setup/common.sh@31-@32 read/compare/continue records repeat for each meminfo field until HugePages_Surp matches ...]
00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
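With surp=0 in hand, the snapshots above are easy to cross-check: a pool of 1024 pages of 2048 kB, all free, none reserved or surplus. Quick arithmetic over the reported values (illustrative only):

  total=1024 free=1024 rsvd=0 surp=0 pagesize_kb=2048
  echo $((total * pagesize_kb))   # 2097152 kB -> matches the Hugetlb line
  echo $((free - rsvd))           # 1024 pages still available, as expected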
setup/hugepages.sh@99 -- # surp=0 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46638080 kB' 'MemAvailable: 50052056 kB' 'Buffers: 12432 kB' 'Cached: 9111928 kB' 'SwapCached: 0 kB' 'Active: 6658352 kB' 'Inactive: 3452172 kB' 'Active(anon): 6281644 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 989352 kB' 'Mapped: 169732 kB' 'Shmem: 5295480 kB' 'KReclaimable: 157436 kB' 'Slab: 446124 kB' 'SReclaimable: 157436 kB' 'SUnreclaim: 288688 kB' 'KernelStack: 12736 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7791068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193608 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.982 
09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.982 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.983 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:56.984 nr_hugepages=1024 00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:56.984 
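For readers decoding the trace: surp and resv above come from get_meminfo in setup/common.sh, which snapshots the meminfo file into an array and compares each field name against the requested key. A minimal, standalone sketch of that scan technique, assuming a plain /proc/meminfo (no per-node file); my_get_meminfo is a hypothetical name, not the SPDK function:

#!/usr/bin/env bash
# Sketch of the field scan traced above. Each /proc/meminfo line is
# "Key:   value [kB]"; IFS=': ' splits off the key, and the value of
# the first matching key is printed. my_get_meminfo is hypothetical.
my_get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. "0" for HugePages_Surp
            return 0
        fi
    done < /proc/meminfo
    return 1
}

# Usage mirroring the bookkeeping in the trace:
surp=$(my_get_meminfo HugePages_Surp)
resv=$(my_get_meminfo HugePages_Rsvd)
echo "surp=$surp resv=$resv"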
00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:56.984 surplus_hugepages=0
00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:56.984 anon_hugepages=0
00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:56.984 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:57.245 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:57.246 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:57.246 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:57.246 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:57.246 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:57.246 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:57.246 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:57.246 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:57.246 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:57.246 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:57.246 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:57.246 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:57.246 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46638080 kB' 'MemAvailable: 50052056 kB' 'Buffers: 12432 kB' 'Cached: 9111928 kB' 'SwapCached: 0 kB' 'Active: 6658532 kB' 'Inactive: 3452172 kB' 'Active(anon): 6281824 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 989564 kB' 'Mapped: 169732 kB' 'Shmem: 5295480 kB' 'KReclaimable: 157436 kB' 'Slab: 446124 kB' 'SReclaimable: 157436 kB' 'SUnreclaim: 288688 kB' 'KernelStack: 12768 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7791092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193624 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
[... trace condensed: the HugePages_Total scan walks every field of the snapshot from MemTotal onward; only HugePages_Total matches ...]
00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
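The get_nodes call traced above discovers the NUMA topology from sysfs with an extglob pattern and records an even split of the 1024-page pool (512 per node on this 2-node box). A rough, standalone sketch of that discovery step, assuming bash 4+ with extglob; the variable names are illustrative, not SPDK's:

#!/usr/bin/env bash
# Sketch of NUMA node discovery as traced above: match node directories
# in sysfs and assign each an even share of the hugepage pool.
shopt -s extglob nullglob
declare -A nodes_sys
pages_per_node=512   # assumption: 1024 total pages over 2 nodes

for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$pages_per_node   # key is the node id
done
echo "no_nodes=${#nodes_sys[@]}"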
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.247 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 23001868 kB' 'MemUsed: 9828016 kB' 'SwapCached: 0 kB' 'Active: 4640236 kB' 'Inactive: 3357784 kB' 'Active(anon): 4506816 kB' 'Inactive(anon): 0 kB' 'Active(file): 133420 kB' 'Inactive(file): 3357784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7326124 kB' 'Mapped: 77348 kB' 'AnonPages: 675100 kB' 'Shmem: 3834920 kB' 'KernelStack: 5832 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 59292 kB' 'Slab: 176808 kB' 'SReclaimable: 59292 kB' 'SUnreclaim: 117516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:57.248 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.248 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:57.248 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.248 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.248 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.248 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:05:57.248 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: the HugePages_Surp key scan walks every remaining node0 meminfo field, MemUsed, SwapCached, the Active/Inactive counters and their anon/file variants, Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, and the hugepage counters through HugePages_Total, with each non-matching key taking the "continue" branch)
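The pattern behind that scan is the whole of setup/common.sh's meminfo parsing: pick /proc/meminfo or the per-node file, strip the per-node "Node N " prefix, then split each line on ': ' and echo the value once the requested key matches. A minimal standalone sketch of that loop, reconstructed from the common.sh@17-33 fragments in the trace (the function body here is a re-creation, not the shipped script):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern below

    # get_meminfo <field> [node]: echo the value of <field> from /proc/meminfo,
    # or from /sys/devices/system/node/node<N>/meminfo when a node is given.
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <N> " prefix; strip it so keys line up.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp 1   # prints 0 against the node1 dump above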
00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.249 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711828 kB' 'MemFree: 23635548 kB' 'MemUsed: 4076280 kB' 'SwapCached: 0 kB' 'Active: 2018540 kB' 'Inactive: 94388 kB' 'Active(anon): 1775252 kB' 'Inactive(anon): 0 kB' 'Active(file): 243288 kB' 'Inactive(file): 94388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1798288 kB' 'Mapped: 92384 kB' 'AnonPages: 314676 kB' 'Shmem: 1460612 kB' 'KernelStack: 6936 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98144 kB' 'Slab: 269312 kB' 'SReclaimable: 98144 kB' 'SUnreclaim: 171168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:57.249 
09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: the same key scan repeats over the node1 meminfo dump, MemTotal through FilePmdMapped, with every non-matching key taking the "continue" branch) 00:05:57.250 09:14:08
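What the hugepages.sh fragments wrapped around these scans are doing: for each node, the expected count is bumped by the reserved and surplus pages reported for that node, and the result is echoed as "nodeN=... expecting ...". A rough sketch of that accounting, following the hugepages.sh@115-128 lines visible in the trace (the resv value and the array initialisation are assumptions; they are not spelled out in this slice of the log):

    # Per-node verification, reusing the get_meminfo sketch above.
    nodes_test=(512 512)   # expected 2048 kB pages per node for this test
    resv=0                 # reserved pages; 0 in this run

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                    # hugepages.sh@116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # hugepages.sh@117
    done

    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    done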
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:57.250 node0=512 expecting 512 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:57.250 node1=512 expecting 512 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:57.250 00:05:57.250 real 0m1.516s 00:05:57.250 user 0m0.641s 00:05:57.250 sys 0m0.837s 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.250 09:14:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:57.250 ************************************ 00:05:57.250 END TEST per_node_1G_alloc 00:05:57.250 ************************************ 00:05:57.250 09:14:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:57.250 09:14:08 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:57.250 09:14:08 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.250 09:14:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.250 09:14:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:57.250 ************************************ 00:05:57.250 START TEST even_2G_alloc 00:05:57.250 ************************************ 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:57.250 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:57.251 09:14:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:58.633 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:58.633 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:05:58.633 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:58.633 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:58.633 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:58.633 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:58.633 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:58.633 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:58.633 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:58.633 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:58.633 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:58.633 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:58.633 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:58.633 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:58.633 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:58.633 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:58.633 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46633060 kB' 'MemAvailable: 50047036 kB' 'Buffers: 12432 kB' 'Cached: 9112044 kB' 'SwapCached: 0 kB' 'Active: 6659300 kB' 'Inactive: 3452172 kB' 'Active(anon): 6282592 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 990180 kB' 'Mapped: 169784 kB' 'Shmem: 5295596 kB' 'KReclaimable: 157436 kB' 'Slab: 446112 kB' 'SReclaimable: 157436 kB' 'SUnreclaim: 288676 kB' 'KernelStack: 12784 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7791160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193704 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.633 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.634 
09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: the AnonHugePages scan walks the Active/Inactive counters, the swap and zswap fields, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, the slab, kernel-stack and page-table counters, NFS_Unstable, Bounce and WritebackTmp, each taking the "continue" branch)
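The guard that kicked off this scan appears at hugepages.sh@96 above, where the test expands to '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]': anonymous hugepage usage is only sampled when transparent hugepages are not pinned to "never", and here the read comes back as anon=0. A short sketch of that check (the sysfs path matches the "always [madvise] never" value format seen in the trace, but it is an inference, not something this log prints):

    # THP guard, as at hugepages.sh@96-97: only sample AnonHugePages when
    # transparent hugepages are not globally disabled.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # THP-backed anonymous memory, in kB
    fi
    echo "anon=$anon"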
00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.634 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46635820 kB' 'MemAvailable: 50049796 kB' 'Buffers: 12432 kB' 'Cached: 9112048 kB' 'SwapCached: 0 kB' 'Active: 6661724 kB' 'Inactive: 3452172 kB' 'Active(anon): 6285016 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 992604 kB' 'Mapped: 170220 kB' 'Shmem: 5295600 kB' 'KReclaimable: 157436 kB' 'Slab: 446112 kB' 'SReclaimable: 157436 kB' 'SUnreclaim: 288676 kB' 'KernelStack: 12816 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7793592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193656 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.635 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: the HugePages_Surp scan continues over the remaining /proc/meminfo keys, each non-matching key taking the "continue" branch)
-- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:58.636 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
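The trace above is setup/common.sh's get_meminfo() resolving HugePages_Surp: with no node argument the per-node sysfs path fails the -e test and /proc/meminfo is used, the file is snapshotted with mapfile, any "Node N " prefixes are stripped, and the key/value pairs are scanned until the requested key matches, at which point its value is echoed. A minimal runnable sketch of that logic, reconstructed from the xtrace (bash >= 4 with extglob assumed; the real setup/common.sh may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val mem_f mem
        mem_f=/proc/meminfo
        # Prefer the per-node view when a NUMA node is requested and present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip it so
        # both file flavors parse the same way.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan "Key: value [unit]" pairs until the requested key matches.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)   # 0 for the snapshot above

Each read/continue pair in the trace is one iteration of that scan loop; the value echoed at setup/common.sh@33 is what hugepages.sh captures into surp.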
00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:58.637 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46630288 kB' 'MemAvailable: 50044264 kB' 'Buffers: 12432 kB' 'Cached: 9112048 kB' 'SwapCached: 0 kB' 'Active: 6664980 kB' 'Inactive: 3452172 kB' 'Active(anon): 6288272 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 995868 kB' 'Mapped: 170180 kB' 'Shmem: 5295600 kB' 'KReclaimable: 157436 kB' 'Slab: 446088 kB' 'SReclaimable: 157436 kB' 'SUnreclaim: 288652 kB' 'KernelStack: 12768 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7797320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193640 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
[setup/common.sh@31-32 xtrace: "IFS=': '", "read -r var val _", and "continue" repeat for every /proc/meminfo key from MemTotal through HugePages_Free that does not match HugePages_Rsvd]
00:05:58.638 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:58.638 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:58.638 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:58.638 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:58.638 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:58.638 nr_hugepages=1024
00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:58.639 resv_hugepages=0
00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:58.639 surplus_hugepages=0
00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:58.639 anon_hugepages=0
00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
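With surp and resv both read back as 0, hugepages.sh@107-109 asserts the pool is self-consistent: the expected page count (the already-expanded literal 1024 in the trace) must equal nr_hugepages plus surplus and reserved pages, and must also equal nr_hugepages outright. A sketch of that arithmetic, reusing the get_meminfo sketch above (verify_even_2G_alloc is an illustrative name, not SPDK code):

    # Mirror of the two checks traced at setup/hugepages.sh@107 and @109.
    verify_even_2G_alloc() {
        local expected=$1   # requested pool size; 1024 in this run
        local nr_hugepages surp resv
        nr_hugepages=$(get_meminfo HugePages_Total)   # 1024 per the dump above
        surp=$(get_meminfo HugePages_Surp)            # 0
        resv=$(get_meminfo HugePages_Rsvd)            # 0
        # The requested size must cover the kernel's total plus any surplus
        # and reserved pages; with both at 0 the totals must match exactly.
        (( expected == nr_hugepages + surp + resv )) || return 1
        (( expected == nr_hugepages ))
    }

    verify_even_2G_alloc 1024 && echo "hugepage accounting consistent"

Both comparisons held in this run, since the trace continues to hugepages.sh@110, which re-queries HugePages_Total below.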
00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46630040 kB' 'MemAvailable: 50044016 kB' 'Buffers: 12432 kB' 'Cached: 9112088 kB' 'SwapCached: 0 kB' 'Active: 6664628 kB' 'Inactive: 3452172 kB' 'Active(anon): 6287920 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 995480 kB' 'Mapped: 170648 kB' 'Shmem: 5295640 kB' 'KReclaimable: 157436 kB' 'Slab: 446128 kB' 'SReclaimable: 157436 kB' 'SUnreclaim: 288692 kB' 'KernelStack: 12784 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7797344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193644 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 
09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.639 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _
00:05:58.639-640 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: `IFS=': ' read -r var val _` walks the remaining meminfo keys -- Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted -- and `continue`s past each one because it is not HugePages_Total]
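For reference, the loop condensed above is the core of the get_meminfo helper in setup/common.sh. The following is a minimal sketch reconstructed from this xtrace alone, not the verbatim SPDK source; the behavior on a missing key (the final return 1) is an assumption:

    #!/usr/bin/env bash
    # Sketch: get_meminfo KEY [NODE] prints the value of KEY from /proc/meminfo,
    # or from the per-node meminfo when NODE is given. Reconstructed from the trace.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With an empty NODE this path does not exist, so the global file is kept.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        # Per-node meminfo lines carry a "Node <N> " prefix; strip it first.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Every non-matching key produces one "continue" in the xtrace.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1   # assumed behavior when the key is absent
    }

In this run, get_meminfo HugePages_Total prints 1024 and get_meminfo HugePages_Surp 0 prints 0, matching the echo lines in the trace that follows.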
00:05:58.640 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:58.640 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:58.640 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:58.640 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:58.640 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:58.640 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27-33 -- # [xtrace condensed: for node in /sys/devices/system/node/node+([0-9]) sets nodes_sys[0]=512 and nodes_sys[1]=512; no_nodes=2]
00:05:58.640 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115-116 -- # for node in "${!nodes_test[@]}"; (( nodes_test[node] += resv ))
00:05:58.640 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:58.640 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-24 -- # local get=HugePages_Surp; local node=0; mem_f=/sys/devices/system/node/node0/meminfo
00:05:58.640 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28-29 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:05:58.641 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22990740 kB' 'MemUsed: 9839144 kB' 'SwapCached: 0 kB' 'Active: 4640452 kB' 'Inactive: 3357784 kB' 'Active(anon): 4507032 kB' 'Inactive(anon): 0 kB' 'Active(file): 133420 kB' 'Inactive(file): 3357784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7326248 kB' 'Mapped: 77360 kB' 'AnonPages: 675164 kB' 'Shmem: 3835044 kB' 'KernelStack: 5832 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 59292 kB' 'Slab: 176824 kB' 'SReclaimable: 59292 kB' 'SUnreclaim: 117532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:58.641-642 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: the scan `continue`s past every node0 key that is not HugePages_Surp (MemTotal through HugePages_Free)]
00:05:58.642 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:58.642 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:58.642 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:58.642 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:58.642 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115-116 -- # for node in "${!nodes_test[@]}"; (( nodes_test[node] += resv ))
00:05:58.642 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:58.642 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-24 -- # local get=HugePages_Surp; local node=1; mem_f=/sys/devices/system/node/node1/meminfo
00:05:58.642 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28-29 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:05:58.642 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711828 kB' 'MemFree: 23637092 kB' 'MemUsed: 4074736 kB' 'SwapCached: 0 kB' 'Active: 2024176 kB' 'Inactive: 94388 kB' 'Active(anon): 1780888 kB' 'Inactive(anon): 0 kB' 'Active(file): 243288 kB' 'Inactive(file): 94388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1798292 kB' 'Mapped: 92820 kB' 'AnonPages: 320324 kB' 'Shmem: 1460616 kB' 'KernelStack: 6952 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98144 kB' 'Slab: 269304 kB' 'SReclaimable: 98144 kB' 'SUnreclaim: 171160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:58.642-643 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: the scan `continue`s past every node1 key that is not HugePages_Surp (MemTotal through HugePages_Free)]
00:05:58.643 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:58.643 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:58.643 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:58.643 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:58.643 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126-127 -- # for node in "${!nodes_test[@]}"; sorted_t[nodes_test[node]]=1; sorted_s[nodes_sys[node]]=1
00:05:58.643 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:58.643 node0=512 expecting 512
00:05:58.643 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:58.643 node1=512 expecting 512
00:05:58.643 09:14:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:58.643 
00:05:58.643 real	0m1.457s
00:05:58.643 user	0m0.616s
00:05:58.643 sys	0m0.803s
00:05:58.643 09:14:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:58.643 09:14:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:58.643 ************************************
00:05:58.643 END TEST even_2G_alloc
00:05:58.643 ************************************
00:05:58.643 09:14:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
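The check that just completed can be summarized in a short sketch. This is a condensed reconstruction of the hugepages.sh@110-130 trace lines with the values observed in this run; it relies on the get_meminfo sketch earlier, and the exact SPDK source may differ:

    # Values observed in this run of even_2G_alloc.
    nr_hugepages=1024 surp=0 resv=0
    nodes_test=(512 512)   # expected per-node split of the 1024 pages
    nodes_sys=(512 512)    # per-node values gathered by get_nodes

    # The global total must account for requested, surplus and reserved pages.
    total=$(get_meminfo HugePages_Total)   # 1024 here
    (( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total: $total"

    # Each node's expectation grows by reserved and that node's surplus pages,
    # then is compared against what sysfs reports.
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # 0 on both nodes
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done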
00:05:58.643 09:14:09 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:58.643 09:14:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:58.643 09:14:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:58.643 09:14:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:58.643 ************************************
00:05:58.643 START TEST odd_alloc
00:05:58.643 ************************************
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62-67 -- # local user_nodes=(); local _nr_hugepages=1025; local _no_nodes=2; local -g nodes_test=()
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81-84 -- # [xtrace condensed: while (( _no_nodes > 0 )) the 1025 pages are dealt out as nodes_test[1]=512, then nodes_test[0]=513]
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:58.643 09:14:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
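The sizing just traced reduces to simple arithmetic. A sketch reconstructed from the trace values; the ceiling division and the dealing loop are assumptions that reproduce the logged nr_hugepages=1025 and the 513/512 split:

    size_kb=2098176        # HUGEMEM=2049 MB requested by odd_alloc
    hugepage_kb=2048       # default 2 MB hugepages (Hugepagesize: 2048 kB below)
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # 1025, an odd count
    # Split across the two NUMA nodes the way the trace shows:
    # node1 gets 1025/2 = 512 first, node0 takes the 513 that remain.
    no_nodes=2 remaining=$nr_hugepages
    declare -a nodes_test
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( remaining / (node + 1) ))
        (( remaining -= nodes_test[node] ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512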
00:06:00.021 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:00.021 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:00.021 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:00.021 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:00.021 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:00.021 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:00.021 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:00.021 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:06:00.021 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:00.021 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:06:00.021 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:00.021 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:00.021 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:00.021 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:00.021 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:00.021 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:00.021 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
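The @96 test above is a gate on transparent hugepages: the bracketed token in /sys/kernel/mm/transparent_hugepage/enabled marks the active mode, and anonymous huge pages are only counted when that mode is not "never". A sketch of the check, reconstructed from the trace (the file contents shown are the ones logged; get_meminfo is the sketch from earlier):

    # "always [madvise] never" was read from the THP control file in this run.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP is not disabled, so AnonHugePages could be non-zero; here it is 0 kB.
        anon=$(get_meminfo AnonHugePages)
    fi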
'Committed_AS: 7775232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193592 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:00.021 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.021
[xtrace condensed: the same IFS=': ' / read -r var val _ / test / continue cycle repeats for every remaining meminfo field, Active(anon) through HardwareCorrupted, each failing [[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]; a readable sketch of this loop follows the HugePages_Surp scan below]
00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.283
09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.283 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46626444 kB' 'MemAvailable: 50040496 kB' 'Buffers: 12432 kB' 'Cached: 9112180 kB' 'SwapCached: 0 kB' 'Active: 6655664 kB' 'Inactive: 3452172 kB' 'Active(anon): 6278956 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 986568 kB' 'Mapped: 168860 kB' 'Shmem: 5295732 kB' 'KReclaimable: 157588 kB' 'Slab: 446220 kB' 'SReclaimable: 157588 kB' 'SUnreclaim: 288632 kB' 'KernelStack: 12656 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7775256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193560 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:06:00.283
[xtrace condensed: common.sh@31-32 scans the snapshot above field by field against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; MemTotal through HugePages_Free all fail the test and continue; see the sketch below]
09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.284 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.285 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.285 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46626444 kB' 'MemAvailable: 50040496 kB' 'Buffers: 12432 kB' 'Cached: 9112204 kB' 'SwapCached: 0 kB' 'Active: 6655384 kB' 'Inactive: 3452172 kB' 'Active(anon): 6278676 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 986240 kB' 'Mapped: 168780 kB' 'Shmem: 5295756 kB' 'KReclaimable: 157588 kB' 'Slab: 446220 kB' 'SReclaimable: 157588 kB' 'SUnreclaim: 288632 kB' 'KernelStack: 12704 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7775644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193592 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:06:00.285
[xtrace condensed: common.sh@31-32 scans the snapshot above field by field against \H\u\g\e\P\a\g\e\s\_\R\s\v\d; MemTotal through HugePages_Free all fail the test and continue]
00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:06:00.286 nr_hugepages=1025 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:00.286 resv_hugepages=0 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:00.286 surplus_hugepages=0 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:00.286 anon_hugepages=0 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:06:00.286
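The hugepages.sh@102-109 lines above are the odd_alloc bookkeeping: the requested odd page count (1025) must equal nr_hugepages plus surplus plus reserved, and must also equal nr_hugepages alone. A hedged sketch of that check, reusing the illustrative helper sketched earlier (variable names mirror the trace; this is not the verbatim hugepages.sh logic):

    # Sketch of the odd_alloc consistency check traced at hugepages.sh@97-109.
    # get_meminfo_sketch is the illustrative helper above, not the real
    # setup/common.sh function.
    anon=$(get_meminfo_sketch AnonHugePages)     # 0 here: no THP in play
    surp=$(get_meminfo_sketch HugePages_Surp)    # surplus hugepages
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # reserved hugepages
    nr_hugepages=$(get_meminfo_sketch HugePages_Total)
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    (( 1025 == nr_hugepages + surp + resv ))     # 1025 is the odd count requested
    (( 1025 == nr_hugepages ))

Both arithmetic tests succeed silently here, which is why the trace moves straight on to the next get_meminfo call.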
09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.286 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46627316 kB' 'MemAvailable: 50041368 kB' 'Buffers: 12432 kB' 'Cached: 9112224 kB' 'SwapCached: 0 kB' 'Active: 6655416 kB' 'Inactive: 3452172 kB' 'Active(anon): 6278708 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 985824 kB' 'Mapped: 168856 kB' 'Shmem: 5295776 kB' 'KReclaimable: 157588 kB' 'Slab: 446220 kB' 'SReclaimable: 157588 kB' 'SUnreclaim: 288632 kB' 'KernelStack: 12736 kB' 'PageTables: 8212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7775664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193592 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:06:00.286
[xtrace condensed: common.sh@31-32 scans the snapshot above field by field against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; MemTotal through SecPageTables all fail the test and continue]
00:06:00.287 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.287 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.287 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.287 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.287 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.287 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.287 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.287 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.288 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22991616 kB' 'MemUsed: 9838268 kB' 'SwapCached: 0 kB' 'Active: 4638832 kB' 'Inactive: 3357784 kB' 'Active(anon): 4505412 kB' 'Inactive(anon): 0 kB' 'Active(file): 133420 kB' 'Inactive(file): 3357784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7326376 kB' 'Mapped: 76488 kB' 'AnonPages: 673488 kB' 'Shmem: 3835172 kB' 'KernelStack: 5832 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 59572 kB' 'Slab: 177184 kB' 'SReclaimable: 59572 kB' 'SUnreclaim: 117612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
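For reference: the get_meminfo helper being traced here loads /proc/meminfo (or a node's meminfo file, with its "Node <N> " prefix stripped) into an array and scans it key by key until the requested field matches, printing the value. A minimal bash sketch of that pattern, reconstructed from the xtrace rather than copied from setup/common.sh, so treat the exact structure as illustrative:

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      local -a mem
      local var val _ line
      # Per-node counters live under /sys; fall back to the global file otherwise.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node <N> " prefix, if any
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo HugePages_Total    # 1025 on this box
  get_meminfo HugePages_Surp 0   # 0 for node0

The per-key [[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] lines filling this log are simply the xtrace of that loop, one comparison per meminfo key.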
[xtrace elided: the same read loop checks each node0 meminfo key above against HugePages_Surp and continues until it matches]
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:00.289 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:00.290 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711828 kB' 'MemFree: 23637080 kB' 'MemUsed: 4074748 kB' 'SwapCached: 0 kB' 'Active: 2016660 kB' 'Inactive: 94388 kB' 'Active(anon): 1773372 kB' 'Inactive(anon): 0 kB' 'Active(file): 243288 kB' 'Inactive(file): 94388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1798304 kB' 'Mapped: 92296 kB' 'AnonPages: 312844 kB' 'Shmem: 1460628 kB' 'KernelStack: 6904 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97968 kB' 'Slab: 268988 kB' 'SReclaimable: 97968 kB' 'SUnreclaim: 171020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace elided: the read loop checks each node1 meminfo key above against HugePages_Surp and continues until it matches]
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
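The pass condition echoed just below is worth unpacking: odd_alloc requested 1025 pages on a two-node system, the kernel placed 512 on node0 and 513 on node1, and hugepages.sh@126-@130 compares only the sorted per-node counts against the expected pair, which is why "node0=512 expecting 513" still passes. A sketch of the even-split-plus-remainder distribution the test anticipates (split_hugepages is an illustrative helper, not part of hugepages.sh, and the kernel may park the odd page on either node):

  # Split a hugepage total across nodes: an even share each, with the
  # remainder handed out one page at a time to the lowest-numbered nodes.
  split_hugepages() {
      local total=$1 nodes=$2 i
      for (( i = 0; i < nodes; i++ )); do
          echo "node$i=$(( total / nodes + (i < total % nodes ? 1 : 0) ))"
      done
  }
  split_hugepages 1025 2   # node0=513, node1=512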
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:06:00.291 node0=512 expecting 513
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:06:00.291 node1=513 expecting 512
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:06:00.291
00:06:00.291 real 0m1.565s
00:06:00.291 user 0m0.654s
00:06:00.291 sys 0m0.874s
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:00.291 09:14:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:00.291 ************************************
00:06:00.291 END TEST odd_alloc
00:06:00.291 ************************************
00:06:00.291 09:14:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:06:00.291 09:14:11 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:06:00.291 09:14:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:00.291 09:14:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:00.291 09:14:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:00.291 ************************************
00:06:00.291 START TEST custom_alloc
00:06:00.291 ************************************
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
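Both get_test_nr_hugepages calls traced above reduce to size / Hugepagesize: 1048576 / 2048 = 512 and 2097152 / 2048 = 1024, which reads most naturally as a size in kB against the 2048 kB hugepage size from the earlier meminfo snapshot. A condensed sketch under that assumption (the real helper in setup/hugepages.sh also handles per-node requests, elided here):

  get_test_nr_hugepages() {
      local size=$1                   # requested size in kB (assumed unit)
      local default_hugepages=2048    # Hugepagesize in kB, per the snapshot
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$(( size / default_hugepages ))   # global, as in the trace
  }
  get_test_nr_hugepages 1048576 && echo "$nr_hugepages"   # 512
  get_test_nr_hugepages 2097152 && echo "$nr_hugepages"   # 1024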
00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:00.291 09:14:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:01.671 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:06:01.671 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:06:01.671 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:06:01.671 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:06:01.671 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:06:01.671 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:06:01.671 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:06:01.671 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:06:01.671 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:06:01.671 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:01.671 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:06:01.671 0000:80:04.5 (8086 0e25): Already using the 
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:01.671 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45572668 kB' 'MemAvailable: 48986696 kB' 'Buffers: 12432 kB' 'Cached: 9112312 kB' 'SwapCached: 0 kB' 'Active: 6656024 kB' 'Inactive: 3452172 kB' 'Active(anon): 6279316 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 986544 kB' 'Mapped: 168840 kB' 'Shmem: 5295864 kB' 'KReclaimable: 157540 kB' 'Slab: 445884 kB' 'SReclaimable: 157540 kB' 'SUnreclaim: 288344 kB' 'KernelStack: 12704 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7776020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193720 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
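The @96 test above checks /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never" on this box) so that AnonHugePages is only sampled when THP is not hard-disabled. A stand-alone reading of that gate; an interpretation of the traced pattern, not the SPDK source itself:

  #!/usr/bin/env bash
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
  if [[ $thp != *"[never]"* ]]; then
      # THP is available, so anonymous mappings may be backed by huge pages;
      # AnonHugePages is sampled (0 kB on this host, hence anon=0 below).
      awk '/^AnonHugePages:/ {print $2, $3}' /proc/meminfo
  fi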
[xtrace condensed: setup/common.sh@31-32 re-reads each key of the snapshot above in order, MemTotal through HardwareCorrupted, compares it against AnonHugePages, and emits 'continue' for every non-match until the AnonHugePages line is reached]
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
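get_meminfo, traced here and twice more below, slurps /proc/meminfo with mapfile, strips any "Node N " prefix (the form used by per-node sysfs files), then scans key by key with IFS=': ' read until the requested field matches and echoes its value. A self-contained sketch of the same pattern, not the verbatim setup/common.sh helper:

  #!/usr/bin/env bash
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=${2:-} var val _ line
      local mem_f=/proc/meminfo
      local -a mem
      # With a node argument the per-node sysfs file is preferred; with none,
      # the "node/node/meminfo" probe seen above fails and /proc/meminfo wins.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")     # strip "Node N " from per-node files
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue # the long runs of 'continue' above
          echo "$val"
          return 0
      done
      return 1
  }
  get_meminfo AnonHugePages    # prints 0 on this host, hence anon=0 above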
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:01.672 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45578084 kB' 'MemAvailable: 48992112 kB' 'Buffers: 12432 kB' 'Cached: 9112312 kB' 'SwapCached: 0 kB' 'Active: 6656032 kB' 'Inactive: 3452172 kB' 'Active(anon): 6279324 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 986536 kB' 'Mapped: 168796 kB' 'Shmem: 5295864 kB' 'KReclaimable: 157540 kB' 'Slab: 445884 kB' 'SReclaimable: 157540 kB' 'SUnreclaim: 288344 kB' 'KernelStack: 12736 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7776040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193704 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
[xtrace condensed: the same setup/common.sh@31-32 per-key scan, this time against HugePages_Surp, emitting 'continue' for every key from MemTotal through HugePages_Rsvd until HugePages_Surp matches]
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:01.673 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45578340 kB' 'MemAvailable: 48992368 kB' 'Buffers: 12432 kB' 'Cached: 9112324 kB' 'SwapCached: 0 kB' 'Active: 6655892 kB' 'Inactive: 3452172 kB' 'Active(anon): 6279184 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 986384 kB' 'Mapped: 168796 kB' 'Shmem: 5295876 kB' 'KReclaimable: 157540 kB' 'Slab: 445884 kB' 'SReclaimable: 157540 kB' 'SUnreclaim: 288344 kB' 'KernelStack: 12736 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7776060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193704 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
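With anon=0, surp=0 and (from the scan that follows) HugePages_Rsvd being read next, the verification reduces to checking the static pool numbers visible in the snapshots. A sketch of that bookkeeping; the exact expression used by verify_nr_hugepages is not visible in this segment, so the checks below only mirror the logged values:

  #!/usr/bin/env bash
  meminfo() { awk -v k="$1" '$1 == k ":" {print $2}' /proc/meminfo; }
  nr_hugepages=1536                    # HUGENODE total requested above
  total=$(meminfo HugePages_Total)     # 1536 in every snapshot
  free=$(meminfo HugePages_Free)       # 1536: no page claimed yet
  surp=$(meminfo HugePages_Surp)       # 0
  page_kb=$(meminfo Hugepagesize)      # 2048
  hugetlb_kb=$(meminfo Hugetlb)        # 3145728 = 1536 * 2048
  (( total - surp == nr_hugepages ))  || echo "pool size mismatch"
  (( free <= total ))                 || echo "free pages exceed pool"
  (( hugetlb_kb == total * page_kb )) || echo "Hugetlb accounting mismatch"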
[xtrace condensed: a third setup/common.sh@31-32 per-key scan, against HugePages_Rsvd, emitting 'continue' for every key from MemTotal through AnonHugePages; the log segment ends mid-comparison]
00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages ==
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.936 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:06:01.937 nr_hugepages=1536 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:01.937 resv_hugepages=0 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:01.937 surplus_hugepages=0 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:01.937 anon_hugepages=0 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45578540 kB' 'MemAvailable: 48992568 kB' 'Buffers: 12432 kB' 'Cached: 9112356 kB' 'SwapCached: 0 kB' 'Active: 6655660 kB' 'Inactive: 3452172 kB' 'Active(anon): 6278952 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 986120 kB' 'Mapped: 168796 kB' 'Shmem: 5295908 kB' 'KReclaimable: 157540 kB' 'Slab: 445980 kB' 'SReclaimable: 157540 kB' 'SUnreclaim: 288440 kB' 'KernelStack: 12688 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7776080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193720 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.937 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:01.938 
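Every lookup traced above funnels through setup/common.sh's get_meminfo: pick a meminfo source (global /proc/meminfo, or a node's /sys/devices/system/node/nodeN/meminfo when a node argument is given), strip the "Node N " prefix the per-node files carry, then scan key/value pairs until the requested field matches. The runs of \H\u\g\e\P\a\g\e\s\_\R\s\v\d are just xtrace's character-escaped rendering of the unquoted right-hand side of that [[ $var == $get ]] test. The sketch below re-creates the routine from the xtrace alone; it is a minimal reconstruction, not the verbatim SPDK source, and the elif branch and failure return value are assumptions where the trace never exercises them.

    shopt -s extglob                     # the +([0-9]) pattern below needs extglob

    get_meminfo() {                      # get_meminfo <field> [node]
        local get=$1
        local node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            # Per-node query: read that node's own meminfo file instead.
            mem_f=/sys/devices/system/node/node$node/meminfo
        elif [[ -n $node ]]; then
            return 1                     # node requested but absent (assumed; branch never hit in the trace)
        fi

        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # IFS=': ' splits "HugePages_Total: 1536" into var=HugePages_Total,
        # val=1536, and swallows a trailing "kB" into _.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")

        return 1
    }

    get_meminfo HugePages_Total          # e.g. prints 1536 on this box
    get_meminfo HugePages_Surp 0         # node0's surplus; prints 0 here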
00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:01.938 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:01.939 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:01.939 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:01.939 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22985620 kB' 'MemUsed: 9844264 kB' 'SwapCached: 0 kB' 'Active: 4639008 kB' 'Inactive: 3357784 kB' 'Active(anon): 4505588 kB' 'Inactive(anon): 0 kB' 'Active(file): 133420 kB' 'Inactive(file): 3357784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7326484 kB' 'Mapped: 76500 kB' 'AnonPages: 673380 kB' 'Shmem: 3835280 kB' 'KernelStack: 5784 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 59572 kB' 'Slab: 177060 kB' 'SReclaimable: 59572 kB' 'SUnreclaim: 117488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: same setup/common.sh@31-32 key-by-key scan of the node0 snapshot above until HugePages_Surp matches]
00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
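The hugepages.sh@115-117 lines above are the per-node bookkeeping for the custom 512/1024 split this test requests: each expected node count is padded with the reserved pages, then the node's own surplus (read from its per-node meminfo) is added on top; note that @117's "+= 0" is the already-substituted result of the inner get_meminfo call. Schematically, under the same caveat that this is inferred from the trace rather than copied from hugepages.sh, and with initial values mirroring this run:

    nodes_test=([0]=512 [1]=1024)    # custom per-node split requested by this test
    resv=0                           # HugePages_Rsvd, read earlier in the trace

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        # Add the node's surplus pages on top (reuses get_meminfo from the
        # sketch above); xtrace shows the substituted value, hence "+= 0".
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done
    # With resv=0 and surp=0 on both nodes, 512 and 1024 are left untouched,
    # consistent with the global check "(( 1536 == nr_hugepages + surp + resv ))".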
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711828 kB' 'MemFree: 22592920 kB' 'MemUsed: 5118908 kB' 'SwapCached: 0 kB' 'Active: 2016784 kB' 'Inactive: 94388 kB' 'Active(anon): 1773496 kB' 'Inactive(anon): 0 kB' 'Active(file): 243288 kB' 'Inactive(file): 94388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1798308 kB' 'Mapped: 92296 kB' 'AnonPages: 312868 kB' 'Shmem: 1460632 kB' 'KernelStack: 6888 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97968 kB' 'Slab: 268920 kB' 'SReclaimable: 97968 kB' 'SUnreclaim: 170952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:01.940 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [... xtrace trimmed: get_meminfo compared each remaining meminfo field, SwapCached through HugePages_Free, against HugePages_Surp and skipped every one with continue ...]
00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:01.941 09:14:12
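The scan condensed above is the tail of get_meminfo(), the setup/common.sh helper that walks /proc/meminfo (or the per-node copy under /sys/devices/system/node) field by field until it reaches the requested key, here HugePages_Surp. A minimal sketch of that loop, reconstructed from this xtrace rather than from the upstream source, so details may differ:

    #!/usr/bin/env bash
    # get_meminfo <field> [node]: print the value of one meminfo field.
    # Reconstructed from the setup/common.sh@17-33 trace above; not verbatim.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem
        local mem_f=/proc/meminfo
        # with a node argument, prefer that node's view of the counters
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip "Node N " prefixes
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"                 # kB value, or a bare page count
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp              # prints 0 in the run above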
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:01.941 node0=512 expecting 512 00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:06:01.941 node1=1024 expecting 1024 00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:06:01.941 00:06:01.941 real 0m1.545s 00:06:01.941 user 0m0.634s 00:06:01.941 sys 0m0.872s 00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.941 09:14:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:01.941 ************************************ 00:06:01.941 END TEST custom_alloc 00:06:01.941 ************************************ 00:06:01.941 09:14:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:01.941 09:14:12 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:01.941 09:14:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.941 09:14:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.941 09:14:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:01.941 ************************************ 00:06:01.941 START TEST no_shrink_alloc 00:06:01.941 ************************************ 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:01.941 09:14:12 
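The get_test_nr_hugepages call whose trace starts above and finishes just below turns the requested size into a page count: at the default 2048 kB hugepage size, 2097152 kB / 2048 kB = 1024 pages, and the single node id passed in pins all of them to node 0. A sketch of that flow with names taken from the trace (setup/hugepages.sh@49-73); the upstream bodies may differ in detail:

    # get_test_nr_hugepages <size-kB> [node-id ...], per the trace above.
    # Reconstructed from the xtrace; not the verbatim upstream helper.
    default_hugepages=2048                  # kB (Hugepagesize: 2048 kB)

    get_test_nr_hugepages() {
        local size=$1 node_ids=()
        (( $# > 1 )) && { shift; node_ids=("$@"); }
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))    # 2097152/2048 = 1024
        get_test_nr_hugepages_per_node "${node_ids[@]}"
    }

    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@")
        local _nr_hugepages=$nr_hugepages
        local _no_nodes=2                   # node count; 2 NUMA nodes here
        local -g nodes_test=()
        if (( ${#user_nodes[@]} > 0 )); then
            # an explicit node list gets the full count on each listed node;
            # the no_shrink_alloc test passes node 0 only
            for _no_nodes in "${user_nodes[@]}"; do
                nodes_test[_no_nodes]=$_nr_hugepages
            done
        fi
        return 0
    }

    get_test_nr_hugepages 2097152 0         # leaves nodes_test[0]=1024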
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:01.941 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:01.942 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:01.942 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:01.942 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:06:01.942 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:01.942 09:14:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:03.319 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:06:03.319 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:06:03.319 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:06:03.319 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:06:03.319 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:06:03.319 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:06:03.319 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:06:03.319 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:06:03.319 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:06:03.319 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:03.319 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:06:03.319 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:06:03.319 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:06:03.319 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:06:03.319 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:06:03.319 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:06:03.319 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:06:03.319 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46619872 kB' 'MemAvailable: 50033900 kB' 'Buffers: 12432 kB' 'Cached: 9112440 kB' 'SwapCached: 0 kB' 'Active: 6656764 kB' 'Inactive: 3452172 kB' 'Active(anon): 6280056 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 986904 kB' 'Mapped: 169396 kB' 'Shmem: 5295992 kB' 'KReclaimable: 157540 kB' 'Slab: 446168 kB' 'SReclaimable: 157540 kB' 'SUnreclaim: 288628 kB' 'KernelStack: 12720 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7776284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193672 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': '
00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [... xtrace trimmed: each field from Cached through VmallocChunk was compared against AnonHugePages and skipped with continue ...]
00:06:03.320 09:14:14
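This pass is the first of the three verify_nr_hugepages probes: AnonHugePages is sampled only when transparent hugepages are not pinned to "never" (the [[ always [madvise] never != *\[never\]* ]] guard traced at setup/hugepages.sh@96 above), and HugePages_Surp and HugePages_Rsvd follow below. A sketch of that opening sequence, reconstructed from the trace (setup/hugepages.sh@89-100) and reusing the get_meminfo sketch from earlier; the echo line is an illustration of mine, and the per-node comparison that comes afterwards lies outside this trace:

    # Opening of verify_nr_hugepages() per the xtrace around this point
    # (setup/hugepages.sh@89-100). Reconstructed; not verbatim.
    verify_nr_hugepages() {
        local node
        local sorted_t sorted_s
        local surp resv anon=0

        # sample AnonHugePages unless THP is pinned to "never"
        if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
            anon=$(get_meminfo AnonHugePages)       # 0 kB in this run
        fi
        surp=$(get_meminfo HugePages_Surp)          # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)          # 0 in this run
        echo "anon=$anon surp=$surp resv=$resv"     # illustrative only
        # upstream then checks per-node totals against nodes_test[];
        # that part is not shown in the trace here
    }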
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46622664 kB' 'MemAvailable: 50036692 kB' 'Buffers: 12432 kB' 'Cached: 9112440 kB' 'SwapCached: 0 kB' 'Active: 6659584 kB' 'Inactive: 3452172 kB' 'Active(anon): 6282876 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 989720 kB' 'Mapped: 169764 kB' 'Shmem: 5295992 kB' 'KReclaimable: 157540 kB' 'Slab: 446116 kB' 'SReclaimable: 157540 kB' 'SUnreclaim: 288576 kB' 'KernelStack: 12752 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7779872 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193608 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.320 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [... xtrace trimmed: each field from Active(anon) through HugePages_Total was compared against HugePages_Surp and skipped with continue ...]
00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46618072 kB' 'MemAvailable: 50032100 kB' 'Buffers: 12432 kB' 'Cached: 9112460 kB' 'SwapCached: 0 kB' 'Active: 6662648 kB' 'Inactive: 3452172 kB' 'Active(anon): 6285940 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 993188 kB' 'Mapped: 169272 kB' 'Shmem: 5296012 kB' 'KReclaimable: 157540 kB' 'Slab: 446108 kB' 'SReclaimable: 157540 kB' 'SUnreclaim: 288568 kB' 'KernelStack: 12736 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7784176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193612 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.321 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
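
The block above is bash xtrace from the get_meminfo helper in setup/common.sh: it walks a copy of /proc/meminfo with IFS=': ' so each read yields a key and a value, and the backslash-heavy right-hand sides such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are only how xtrace prints a quoted, literal [[ pattern. A minimal stand-alone sketch of the same lookup, assuming the plain /proc/meminfo case only (the real helper also handles per-node files):

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # A quoted right-hand side makes [[ compare literally, which is
            # what the escaped patterns in the trace amount to.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo HugePages_Rsvd    # prints 0 on this host, per the trace
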
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:03.322 nr_hugepages=1024
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:03.322 resv_hugepages=0
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:03.322 surplus_hugepages=0
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:03.322 anon_hugepages=0
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:03.322 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46625508 kB' 'MemAvailable: 50039536 kB' 'Buffers: 12432 kB' 'Cached: 9112480 kB' 'SwapCached: 0 kB' 'Active: 6658252 kB' 'Inactive: 3452172 kB' 'Active(anon): 6281544 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 988372 kB' 'Mapped: 168836 kB' 'Shmem: 5296032 kB' 'KReclaimable: 157540 kB' 'Slab: 446108 kB' 'SReclaimable: 157540 kB' 'SUnreclaim: 288568 kB' 'KernelStack: 12896 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7785064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193752 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB'
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
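
The same scan is now running for HugePages_Total; it ends below with echo 1024, which satisfies the consistency check (( 1024 == nr_hugepages + surp + resv )) traced earlier, i.e. 1024 == 1024 + 0 + 0. When get_meminfo is given a node, it reads /sys/devices/system/node/node<N>/meminfo instead, where every line carries a "Node <N> " prefix; the mem=("${mem[@]#Node +([0-9]) }") step in the trace strips that prefix with an extglob pattern. A short sketch of that strip, assuming node0 exists on the host:

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    # Turns "Node 0 MemTotal: ..." into "MemTotal: ..." for every element.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"
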
00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:03.323 09:14:14 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21944776 kB' 'MemUsed: 10885108 kB' 'SwapCached: 0 kB' 'Active: 4640924 kB' 'Inactive: 3357784 kB' 'Active(anon): 4507504 kB' 'Inactive(anon): 0 kB' 'Active(file): 133420 kB' 'Inactive(file): 3357784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7326604 kB' 'Mapped: 76600 kB' 'AnonPages: 674480 kB' 'Shmem: 3835400 kB' 'KernelStack: 6008 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 59572 kB' 'Slab: 177228 kB' 'SReclaimable: 59572 kB' 'SUnreclaim: 117656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
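
get_nodes, traced at the start of this block, discovers the NUMA layout by globbing /sys/devices/system/node/node+([0-9]) and records a hugepage count per node (no_nodes=2 here, with all 1024 pages on node 0 and none on node 1). The xtrace only shows the already-expanded assignments (=1024 and =0), so the source of those counts is not visible; a plausible stand-alone equivalent reads the per-node 2 MB counters from sysfs, with the hugepages-2048kB path assumed from the 'Hugepagesize: 2048 kB' line above:

    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # The path ends in "nodeN"; ${node##*node} keeps only N.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"    # 2 on this host
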
00:06:03.323 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
00:06:03.324 09:14:14 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: setup/common.sh@31-32 compared each remaining meminfo key, Inactive(file) through HugePages_Free, against HugePages_Surp; every key fell through to continue]
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:03.325 09:14:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:04.709 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:04.709 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:04.709 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:04.709 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:04.709 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:04.709 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:04.709 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:04.709 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:06:04.709 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:04.709 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:06:04.709 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:04.709 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:04.709 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:04.709 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:04.709 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:04.709 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:04.709 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
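The scan condensed above is SPDK's get_meminfo helper walking a meminfo file one "key: value" line at a time with IFS=': ' and read -r until the requested key matches (this pass read the per-node file for node0; note the FilePages key, which appears only in per-node meminfo). A minimal standalone sketch of the same parsing idea, assuming only the global /proc/meminfo; the vendored setup/common.sh additionally selects /sys/devices/system/node/node<N>/meminfo and strips its "Node <n>" prefix:

    #!/usr/bin/env bash
    # get_meminfo KEY -- print the value column for KEY from /proc/meminfo.
    get_meminfo() {
        local get=$1 var val _
        # IFS=': ' splits "HugePages_Surp:       0" into var=HugePages_Surp, val=0.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done </proc/meminfo
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on the machine traced here

The linear scan per lookup is also why the xtrace repeats one continue line per meminfo key before every match.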
00:06:04.709 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46599012 kB' 'MemAvailable: 50013040 kB' 'Buffers: 12432 kB' 'Cached: 9112556 kB' 'SwapCached: 0 kB' 'Active: 6657608 kB' 'Inactive: 3452172 kB' 'Active(anon): 6280900 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 988020 kB' 'Mapped: 168852 kB' 'Shmem: 5296108 kB' 'KReclaimable: 157540 kB' 'Slab: 446292 kB' 'SReclaimable: 157540 kB' 'SUnreclaim: 288752 kB' 'KernelStack: 12720 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7777028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193688 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
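The mem=(...) expansion at setup/common.sh@29 in the trace above strips the "Node <n> " prefix that per-node meminfo files carry, so the same field parser serves both /proc/meminfo and the per-node files. A quick demo of that extglob idiom with a made-up input line:

    shopt -s extglob                      # +([0-9]) requires extended globs
    line='Node 0 HugePages_Free: 1024'    # per-node meminfo line format
    echo "${line#Node +([0-9]) }"         # strip shortest matching prefix
    # -> HugePages_Free: 1024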
00:06:04.710 09:14:15 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: setup/common.sh@31-32 compared every key of the snapshot above, MemTotal through HardwareCorrupted, against AnonHugePages; each fell through to continue]
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:04.711 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46601660 kB' 'MemAvailable: 50015688 kB' 'Buffers: 12432 kB' 'Cached: 9112560 kB' 'SwapCached: 0 kB' 'Active: 6658336 kB' 'Inactive: 3452172 kB' 'Active(anon): 6281628 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 988704 kB' 'Mapped: 168812 kB' 'Shmem: 5296112 kB' 'KReclaimable: 157540 kB' 'Slab: 446268 kB' 'SReclaimable: 157540 kB' 'SUnreclaim: 288728 kB' 'KernelStack: 12752 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7777044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193624 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
00:06:04.712 09:14:15 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: setup/common.sh@31-32 compared MemTotal through HugePages_Rsvd against HugePages_Surp; each fell through to continue]
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
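A note on the \H\u\g\e\P\a\g\e\s\_\S\u\r\p tokens that fill this trace: when the right-hand side of [[ $var == "$get" ]] is a quoted expansion, bash's xtrace backslash-escapes every character to show the operand is matched literally rather than as a glob. The same rendering is easy to reproduce in any bash shell:

    $ set -x
    $ get=HugePages_Surp
    + get=HugePages_Surp
    $ [[ HugePages_Surp == "$get" ]]
    + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]

The timestamped "file@line -- #" prefix on each traced command evidently comes from the test harness's custom PS4 string rather than bash's default "+ ".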
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:04.713 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46601976 kB' 'MemAvailable: 50016004 kB' 'Buffers: 12432 kB' 'Cached: 9112560 kB' 'SwapCached: 0 kB' 'Active: 6657972 kB' 'Inactive: 3452172 kB' 'Active(anon): 6281264 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 988340 kB' 'Mapped: 168812 kB' 'Shmem: 5296112 kB' 'KReclaimable: 157540 kB' 'Slab: 446268 kB' 'SReclaimable: 157540 kB' 'SUnreclaim: 288728 kB' 'KernelStack: 12752 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7777068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193640 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB'
00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: setup/common.sh@31-32 began comparing MemTotal onward against HugePages_Rsvd; the captured trace breaks off mid-scan at SecPageTables]
# continue 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.714 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:04.715 nr_hugepages=1024 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:04.715 resv_hugepages=0 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:04.715 surplus_hugepages=0 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:04.715 anon_hugepages=0 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
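The xtrace above is the expansion of the script's own get_meminfo helper: it reads /proc/meminfo (or a per-node /sys/devices/system/node/nodeN/meminfo when a node index is supplied) into an array, strips any "Node N" prefix, then walks "Key: value" pairs with IFS=': ' until the requested field matches, echoing its value. A minimal standalone sketch of that same pattern, not the script's exact code:

#!/usr/bin/env bash
# Sketch of the /proc/meminfo parser whose xtrace appears above.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem
    # With a node index, prefer that node's sysfs meminfo (per-node counters).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "Key: value kB" pairs until the requested key matches.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total    # -> 1024 in the run above
get_meminfo HugePages_Surp 0   # -> node 0's surplus count (0 here)

The test uses exactly these lookups: it checks that HugePages_Total equals nr_hugepages plus the reserved and surplus counts, then repeats the parse against node 0's meminfo before asserting node0=1024.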
00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46602548 kB' 'MemAvailable: 50016576 kB' 'Buffers: 12432 kB' 'Cached: 9112600 kB' 'SwapCached: 0 kB' 'Active: 6657976 kB' 'Inactive: 3452172 kB' 'Active(anon): 6281268 kB' 'Inactive(anon): 0 kB' 'Active(file): 376708 kB' 'Inactive(file): 3452172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 988348 kB' 'Mapped: 168812 kB' 'Shmem: 5296152 kB' 'KReclaimable: 157540 kB' 'Slab: 446332 kB' 'SReclaimable: 157540 kB' 'SUnreclaim: 288792 kB' 'KernelStack: 12768 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7777088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193640 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 579164 kB' 'DirectMap2M: 12972032 kB' 'DirectMap1G: 55574528 kB' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.715 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.716 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:04.717 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21940552 kB' 'MemUsed: 10889332 kB' 'SwapCached: 0 kB' 'Active: 4641528 kB' 'Inactive: 3357784 kB' 'Active(anon): 4508108 kB' 'Inactive(anon): 0 kB' 'Active(file): 133420 kB' 'Inactive(file): 3357784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7326732 kB' 'Mapped: 76516 kB' 'AnonPages: 675852 kB' 'Shmem: 3835528 kB' 'KernelStack: 5832 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 59572 kB' 'Slab: 177240 kB' 'SReclaimable: 59572 kB' 'SUnreclaim: 117668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.976 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.977 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:04.978 node0=1024 expecting 1024 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:04.978 00:06:04.978 real 0m2.931s 00:06:04.978 user 0m1.232s 00:06:04.978 sys 0m1.623s 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.978 09:14:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:04.978 ************************************ 00:06:04.978 END TEST no_shrink_alloc 
00:06:04.978 09:14:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:06:04.978 09:14:15 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:06:04.978 09:14:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:06:04.978 09:14:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:06:04.978 09:14:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:06:04.978 09:14:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
[xtrace condensed: the @40/@41 write-zero pair runs once more for the second hugepage size on this node, then @39-@41 repeat both writes for the second NUMA node]
00:06:04.978 09:14:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:06:04.978 09:14:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:06:04.978 
00:06:04.978 real	0m11.975s
00:06:04.978 user	0m4.630s
00:06:04.978 sys	0m6.178s
00:06:04.978 09:14:15 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:04.978 09:14:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:04.978 ************************************
00:06:04.978 END TEST hugepages
00:06:04.978 ************************************
00:06:04.978 09:14:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0
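
clear_hp's teardown is a plain sysfs walk: every hugepage size on every NUMA node gets its reservation set back to zero. A sketch under the same /sys layout -- the loop shape mirrors the @39-@41 trace and the export is shown at @45; it must run as root, since nr_hugepages is writable only by the administrator:

    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # release every reserved page of this size
        done
    done
    export CLEAR_HUGE=yes   # flag that later setup.sh invocations can honor
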
00:06:04.978 09:14:15 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:06:04.978 09:14:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:04.978 09:14:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:04.978 09:14:15 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:06:04.978 ************************************
00:06:04.978 START TEST driver
00:06:04.978 ************************************
00:06:04.978 09:14:16 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:06:04.978 * Looking for test storage...
00:06:04.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:06:04.978 09:14:16 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:06:04.978 09:14:16 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:06:04.978 09:14:16 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:06:07.513 09:14:18 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:06:07.514 09:14:18 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:07.514 09:14:18 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:07.514 09:14:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:06:07.514 ************************************
00:06:07.514 START TEST guess_driver
00:06:07.514 ************************************
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
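
The pick just traced boils down to: if IOMMU groups exist and vfio_pci resolves to an installable module, use vfio-pci. A condensed reconstruction follows -- the is_driver/pick_driver names and the 'No valid driver found' string come from the trace, while the uio_pci_generic fallback is an assumption about the untraced branch:

    #!/usr/bin/env bash
    shopt -s nullglob

    is_driver() {
        # modprobe --show-depends prints one "insmod /path/mod.ko*" line per
        # dependency when the module is installable; look for that marker.
        modprobe --show-depends "$1" 2>/dev/null | grep -q '\.ko'
    }

    pick_driver() {
        local iommu_groups=(/sys/kernel/iommu_groups/*)   # 141 groups on this host
        if (( ${#iommu_groups[@]} > 0 )) && is_driver vfio_pci; then
            echo vfio-pci
        elif is_driver uio_pci_generic; then   # assumed fallback branch
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }

    driver=$(pick_driver)
    echo "Looking for driver=$driver"
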
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:06:07.514 Looking for driver=vfio-pci
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:06:07.514 09:14:18 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:06:08.891 09:14:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:06:08.891 09:14:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:06:08.891 09:14:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[xtrace condensed: the @58/@61/@57 triple repeats once per device line emitted by setup.sh config, from 00:06:08.892 (09:14:19) through 00:06:09.830 (09:14:20); every marker is -> and every bound driver is vfio-pci, so fail stays 0]
00:06:10.088 09:14:21 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:06:10.088 09:14:21 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:06:10.088 09:14:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:06:10.088 09:14:21 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:06:12.625 
00:06:12.625 real	0m4.920s
00:06:12.625 user	0m1.091s
00:06:12.625 sys	0m1.883s
00:06:12.625 09:14:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:12.625 09:14:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:06:12.625 ************************************
00:06:12.625 END TEST guess_driver
00:06:12.625 ************************************
00:06:12.625 09:14:23 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:06:12.625 
00:06:12.625 real	0m7.591s
00:06:12.625 user	0m1.694s
00:06:12.625 sys	0m2.916s
00:06:12.625 09:14:23 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:12.625 09:14:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:06:12.625 ************************************
00:06:12.625 END TEST driver
00:06:12.625 ************************************
00:06:12.625 09:14:23 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:06:12.625 09:14:23 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:06:12.625 09:14:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:12.625 09:14:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:12.625 09:14:23 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:06:12.625 ************************************
00:06:12.625 START TEST devices
00:06:12.625 ************************************
00:06:12.625 09:14:23 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:06:12.625 * Looking for test storage...
00:06:12.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:06:12.625 09:14:23 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:06:12.625 09:14:23 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:06:12.625 09:14:23 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:06:12.625 09:14:23 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:06:14.000 09:14:25 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:06:14.000 09:14:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:06:14.000 09:14:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:06:14.000 09:14:25 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:06:14.000 09:14:25 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:06:14.000 09:14:25 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:06:14.000 09:14:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:06:14.000 09:14:25 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:14.000 09:14:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:06:14.000 09:14:25 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:06:14.000 09:14:25 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:06:14.000 09:14:25 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:06:14.000 09:14:25 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:06:14.000 09:14:25 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:06:14.000 09:14:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:06:14.000 09:14:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:06:14.000 09:14:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:06:14.000 09:14:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0
00:06:14.000 09:14:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]]
00:06:14.000 09:14:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:06:14.000 09:14:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:06:14.000 09:14:25 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:06:14.261 No valid GPT data, bailing
00:06:14.261 09:14:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:06:14.261 09:14:25 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:06:14.261 09:14:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:06:14.261 09:14:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:06:14.261 09:14:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:06:14.261 09:14:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:06:14.261 09:14:25 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:06:14.261 09:14:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:06:14.261 09:14:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:06:14.261 09:14:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0
00:06:14.261 09:14:25 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:06:14.261 09:14:25 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
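
What @194-@211 established: the lone NVMe namespace is not zoned, carries no usable partition table (spdk-gpt.py and blkid both come up empty, so block_in_use returns 1), and at 1000204886016 bytes it clears the 3 GiB min_disk_size bar, so it becomes test_disk. A rough equivalent of that scan for PCIe NVMe only -- the sysfs-to-PCI-address hop and the plain nvme* glob are simplifications (the trace's glob additionally excludes nvme*c* multipath nodes):

    min_disk_size=3221225472   # 3 GiB
    declare -a blocks
    declare -A blocks_to_pci
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        zoned=$(cat "$block/queue/zoned" 2>/dev/null || echo none)
        [[ $zoned == none ]] || continue                        # zoned namespaces are skipped
        pci=$(basename "$(readlink -f "$block/device/device")") # e.g. 0000:0b:00.0 on PCIe NVMe
        size=$(( $(cat "$block/size") * 512 ))                  # sysfs size is in 512-byte sectors
        (( size >= min_disk_size )) || continue
        blkid -s PTTYPE -o value "/dev/$dev" || true            # partition-table probe, as traced
        blocks+=("$dev")
        blocks_to_pci[$dev]=$pci
    done
    echo "test disk: ${blocks[0]:-none}"
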
00:06:14.261 09:14:25 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:06:14.261 09:14:25 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:14.261 09:14:25 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:14.261 09:14:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:06:14.261 ************************************
00:06:14.261 START TEST nvme_mount
00:06:14.261 ************************************
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:06:14.261 09:14:25 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:06:15.202 Creating new GPT entries in memory.
00:06:15.202 GPT data structures destroyed! You may now partition the disk using fdisk or
00:06:15.202 other utilities.
00:06:15.202 09:14:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:06:15.202 09:14:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:06:15.202 09:14:26 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:06:15.202 09:14:26 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:06:15.202 09:14:26 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:06:16.143 Creating new GPT entries in memory.
00:06:16.143 The operation has completed successfully.
00:06:16.143 09:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:06:16.143 09:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:06:16.143 09:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 704647
00:06:16.143 09:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:06:16.143 09:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:06:16.143 09:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:06:16.143 09:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:06:16.143 09:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:06:16.401 09:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
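
partition_drive plus mkfs/mount is the whole build-up for this test: wipe the label, carve one 1 GiB partition (1073741824 / 512 = 2097152 sectors, hence sectors 2048-2099199), format it and mount it. Stripped of the flock and uevent-sync wrappers the log shows, the sequence is:

    disk=/dev/nvme0n1
    mount_point=/tmp/nvme_mount           # the test's real path lives under spdk/test/setup
    sgdisk "$disk" --zap-all              # destroy any existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:2099199   # one 1 GiB partition starting at sector 2048
    mkdir -p "$mount_point"
    mkfs.ext4 -qF "${disk}p1"             # quiet + force: the partition is brand new
    mount "${disk}p1" "$mount_point"
    touch "$mount_point/test_nvme"        # the dummy file verify() looks for later
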
00:06:16.401 09:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:06:16.401 09:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0
00:06:16.401 09:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:06:16.401 09:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:06:16.402 09:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:06:16.402 09:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:06:16.402 09:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:06:16.402 09:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:06:16.402 09:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:06:16.402 09:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:16.402 09:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0
00:06:16.402 09:14:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:06:16.402 09:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:06:16.402 09:14:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:06:17.338 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]]
00:06:17.338 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[xtrace condensed: the @62/@60 compare-and-read pair repeats for 0000:00:04.6 through 0000:00:04.0; none of those devices matches the allowed NVMe]
00:06:17.339 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]]
00:06:17.339 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:06:17.339 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:06:17.339 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[xtrace condensed: the @62/@60 pair repeats for 0000:80:04.7 through 0000:80:04.0 with no further match]
00:06:17.600 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:06:17.600 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:06:17.600 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:06:17.600 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:06:17.600 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:06:17.600 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:06:17.600 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:06:17.600 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:06:17.600 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:06:17.600 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:06:17.600 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:06:17.600 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:06:17.600 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:06:17.858 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:06:17.858 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:06:17.858 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:06:17.858 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:06:17.858 09:14:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
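
Each verify() pass above re-runs setup.sh config restricted to the test device and checks that the status line for 0000:0b:00.0 still carries the expected "Active devices: ..." entry -- proof that the mount (or holder) is pinning the device so setup.sh refuses to rebind it. A sketch of that scan, with the field layout inferred from the trace's `read -r pci _ _ status` pattern and the script path abbreviated:

    PCI_ALLOWED=0000:0b:00.0
    expected=nvme0n1:nvme0n1p1   # later passes use nvme0n1:nvme0n1, data@nvme0n1, holder@...:dm-0
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$PCI_ALLOWED" ]] || continue   # skip the 0000:00:04.x / 0000:80:04.x devices
        [[ $status == *"Active devices: "*"$expected"* ]] && found=1
    done < <(PCI_ALLOWED=$PCI_ALLOWED ./scripts/setup.sh config)
    (( found == 1 )) || echo "device is no longer held by its mount" >&2
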
00:06:17.858 09:14:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:06:17.858 09:14:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:06:17.858 09:14:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:06:17.858 09:14:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:06:17.858 09:14:29 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:06:17.858 09:14:29 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:06:17.858 09:14:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0
00:06:17.858 09:14:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:06:17.858 09:14:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:06:17.858 09:14:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:06:17.858 09:14:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:06:17.858 09:14:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:06:17.859 09:14:29 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:06:17.859 09:14:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:06:17.859 09:14:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.859 09:14:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0
00:06:17.859 09:14:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:06:17.859 09:14:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:06:17.859 09:14:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:06:19.237 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]]
00:06:19.237 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[xtrace condensed: the @62/@60 pair repeats for 0000:00:04.6 through 0000:00:04.0 without a match]
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]]
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[xtrace condensed: the @62/@60 pair repeats for 0000:80:04.7 through 0000:80:04.0 with no further match]
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' ''
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:06:19.238 09:14:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]]
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[xtrace condensed: the @62/@60 pair repeats for 0000:00:04.6 through 0000:00:04.0 without a match]
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]]
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[xtrace condensed: the @62/@60 pair repeats for 0000:80:04.7 through 0000:80:04.0 with no further match]
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:06:20.616 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:06:20.616 
00:06:20.616 real	0m6.479s
00:06:20.616 user	0m1.462s
00:06:20.616 sys	0m2.567s
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:20.616 09:14:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:06:20.616 ************************************
00:06:20.616 END TEST nvme_mount
00:06:20.616 ************************************
00:06:20.616 09:14:31 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0
00:06:20.616 09:14:31 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:06:20.616 09:14:31 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:20.616 09:14:31 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:20.616 09:14:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:06:20.617 ************************************
00:06:20.617 START TEST dm_mount
00:06:20.617 ************************************
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:06:20.617 09:14:31 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:06:21.995 Creating new GPT entries in memory.
00:06:21.995 GPT data structures destroyed! You may now partition the disk using fdisk or
00:06:21.995 other utilities.
00:06:21.995 09:14:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:06:21.995 09:14:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:06:21.995 09:14:32 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:06:21.995 09:14:32 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:06:21.995 09:14:32 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:06:22.936 Creating new GPT entries in memory.
00:06:22.936 The operation has completed successfully.
00:06:22.936 09:14:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:06:22.936 09:14:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:06:22.936 09:14:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:06:22.936 09:14:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:06:22.936 09:14:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:06:23.876 Creating new GPT entries in memory.
00:06:23.876 The operation has completed successfully.
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 707035
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:06:23.876 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
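
With both 1 GiB partitions in place, dm_mount assembles them into a single device-mapper device, waits briefly for the /dev/mapper node, then formats it. The trace never prints the dmsetup table itself, so the linear concatenation below is an assumption consistent with the partition sizes; the {1..5} retry loop is taken from @160-@161:

    size=2097152   # sectors per partition (2099199-2048+1 and 4196351-2099200+1)
    printf '%s\n' \
        "0 $size linear /dev/nvme0n1p1 0" \
        "$size $size linear /dev/nvme0n1p2 0" | dmsetup create nvme_dm_test
    for t in {1..5}; do
        [[ -e /dev/mapper/nvme_dm_test ]] && break
        sleep 0.2   # illustrative back-off; the trace shows only the retry loop
    done
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
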
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:06:23.877 09:14:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:06:24.815 09:14:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]]
00:06:24.815 09:14:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[xtrace condensed: the @62/@60 pair repeats for 0000:00:04.6 through 0000:00:04.0 without a match; the timestamps roll from 09:14:35 to 09:14:36 at 0000:00:04.2]
00:06:25.073 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]]
00:06:25.073 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[xtrace condensed: the @62/@60 pair repeats for 0000:80:04.7 through 0000:80:04.0 with no further match]
00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
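
The holder@nvme0n1p1:dm-0 strings in that status line come straight from sysfs: once the dm table is live, each backing partition lists the dm node under its holders/ directory, and that is what keeps the device pinned even after the filesystem is unmounted. The resolution step mirrors @165-@169 of the trace:

    dm=$(readlink -f /dev/mapper/nvme_dm_test)   # -> /dev/dm-0
    dm=${dm##*/}                                 # -> dm-0
    for part in nvme0n1p1 nvme0n1p2; do
        [[ -e /sys/class/block/$part/holders/$dm ]] && echo "holder@$part:$dm"
    done
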
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:25.074 09:14:36 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:26.454 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:26.714 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:26.714 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:26.714 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:26.714 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:26.714 09:14:37 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:26.714 00:06:26.714 real 0m5.939s 00:06:26.714 user 0m1.009s 00:06:26.714 sys 0m1.785s 00:06:26.714 09:14:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.714 09:14:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:26.714 ************************************ 00:06:26.714 END TEST dm_mount 00:06:26.715 ************************************ 00:06:26.715 09:14:37 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:06:26.715 09:14:37 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:26.715 09:14:37 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:26.715 09:14:37 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:26.715 09:14:37 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:26.715 09:14:37 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:26.715 09:14:37 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:26.715 09:14:37 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:26.973 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:26.973 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:26.973 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:26.973 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:26.973 09:14:37 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:26.973 09:14:37 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:26.973 09:14:37 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:26.973 09:14:37 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:26.973 09:14:37 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:26.973 09:14:37 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:26.973 09:14:37 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:26.973 00:06:26.973 real 0m14.368s 00:06:26.973 user 0m3.150s 00:06:26.973 sys 0m5.385s 00:06:26.973 09:14:38 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.973 09:14:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:26.973 ************************************ 00:06:26.973 END TEST devices 00:06:26.973 ************************************ 00:06:26.973 09:14:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:26.973 00:06:26.973 real 0m45.203s 00:06:26.973 user 0m12.935s 00:06:26.973 sys 0m20.317s 00:06:26.973 09:14:38 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.973 09:14:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:26.973 ************************************ 00:06:26.973 END TEST setup.sh 00:06:26.973 ************************************ 00:06:26.973 09:14:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:26.973 09:14:38 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:28.348 Hugepages 00:06:28.348 node hugesize free / total 00:06:28.348 node0 1048576kB 0 / 0 00:06:28.348 node0 2048kB 2048 / 2048 00:06:28.348 node1 1048576kB 0 / 0 00:06:28.348 node1 2048kB 0 / 0 00:06:28.348 00:06:28.348 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:28.348 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:06:28.348 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:06:28.348 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:06:28.348 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:06:28.348 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:06:28.348 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:06:28.348 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:06:28.348 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:06:28.348 NVMe 0000:0b:00.0 
8086 0a54 0 nvme nvme0 nvme0n1 00:06:28.348 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:06:28.348 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:06:28.348 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:06:28.348 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:06:28.348 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:06:28.348 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:06:28.348 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:06:28.348 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:06:28.348 09:14:39 -- spdk/autotest.sh@130 -- # uname -s 00:06:28.348 09:14:39 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:28.348 09:14:39 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:28.348 09:14:39 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:29.727 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:29.727 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:29.727 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:29.727 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:29.727 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:29.727 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:29.727 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:29.727 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:29.727 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:29.727 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:29.727 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:29.727 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:29.727 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:29.727 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:29.727 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:29.727 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:30.669 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:06:30.669 09:14:41 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:31.606 09:14:42 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:31.606 09:14:42 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:31.606 09:14:42 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:31.606 09:14:42 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:31.606 09:14:42 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:31.606 09:14:42 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:31.606 09:14:42 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:31.606 09:14:42 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:31.606 09:14:42 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:31.865 09:14:42 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:31.865 09:14:42 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:06:31.865 09:14:42 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:32.800 Waiting for block devices as requested 00:06:33.058 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:33.058 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:33.058 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:33.318 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:33.318 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:33.318 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:33.318 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:33.577 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:33.577 0000:0b:00.0 (8086 0a54): vfio-pci -> 
nvme 00:06:33.835 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:33.835 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:33.835 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:34.095 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:34.095 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:34.095 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:34.095 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:34.355 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:34.355 09:14:45 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:34.355 09:14:45 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:06:34.355 09:14:45 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:34.355 09:14:45 -- common/autotest_common.sh@1502 -- # grep 0000:0b:00.0/nvme/nvme 00:06:34.355 09:14:45 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:06:34.355 09:14:45 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:06:34.355 09:14:45 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:06:34.355 09:14:45 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:34.355 09:14:45 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:34.355 09:14:45 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:34.355 09:14:45 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:34.355 09:14:45 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:34.355 09:14:45 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:34.355 09:14:45 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:06:34.355 09:14:45 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:34.355 09:14:45 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:34.355 09:14:45 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:34.355 09:14:45 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:34.355 09:14:45 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:34.355 09:14:45 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:34.355 09:14:45 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:34.355 09:14:45 -- common/autotest_common.sh@1557 -- # continue 00:06:34.355 09:14:45 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:34.355 09:14:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:34.355 09:14:45 -- common/autotest_common.sh@10 -- # set +x 00:06:34.355 09:14:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:34.355 09:14:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:34.355 09:14:45 -- common/autotest_common.sh@10 -- # set +x 00:06:34.355 09:14:45 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:35.736 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:35.736 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:35.736 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:35.736 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:35.736 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:35.736 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:35.736 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:35.736 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:35.736 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:35.736 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:35.736 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:35.736 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:35.736 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:35.736 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:35.736 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:35.736 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:36.672 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:06:36.672 09:14:47 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:36.672 09:14:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:36.672 09:14:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.931 09:14:47 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:36.931 09:14:47 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:36.931 09:14:47 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:36.931 09:14:47 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:36.931 09:14:47 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:36.931 09:14:47 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:36.931 09:14:47 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:36.931 09:14:47 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:36.931 09:14:47 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:36.931 09:14:47 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:36.931 09:14:47 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:36.931 09:14:47 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:36.931 09:14:47 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:06:36.931 09:14:47 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:36.931 09:14:47 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:06:36.931 09:14:47 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:36.931 09:14:47 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:36.931 09:14:47 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:36.931 09:14:47 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:0b:00.0 00:06:36.931 09:14:47 -- common/autotest_common.sh@1592 -- # [[ -z 0000:0b:00.0 ]] 00:06:36.931 09:14:47 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=712337 00:06:36.931 09:14:47 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.931 09:14:47 -- common/autotest_common.sh@1598 -- # waitforlisten 712337 00:06:36.931 09:14:47 -- common/autotest_common.sh@829 -- # '[' -z 712337 ']' 00:06:36.931 09:14:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.931 09:14:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.932 09:14:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.932 09:14:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.932 09:14:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.932 [2024-07-15 09:14:47.994107] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:06:36.932 [2024-07-15 09:14:47.994218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid712337 ] 00:06:36.932 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.932 [2024-07-15 09:14:48.051054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.190 [2024-07-15 09:14:48.151595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.449 09:14:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.449 09:14:48 -- common/autotest_common.sh@862 -- # return 0 00:06:37.449 09:14:48 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:06:37.449 09:14:48 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:06:37.449 09:14:48 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:06:40.732 nvme0n1 00:06:40.732 09:14:51 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:40.732 [2024-07-15 09:14:51.701495] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:40.732 [2024-07-15 09:14:51.701532] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:40.732 request: 00:06:40.732 { 00:06:40.732 "nvme_ctrlr_name": "nvme0", 00:06:40.732 "password": "test", 00:06:40.732 "method": "bdev_nvme_opal_revert", 00:06:40.732 "req_id": 1 00:06:40.732 } 00:06:40.732 Got JSON-RPC error response 00:06:40.732 response: 00:06:40.732 { 00:06:40.732 "code": -32603, 00:06:40.732 "message": "Internal error" 00:06:40.732 } 00:06:40.732 09:14:51 -- common/autotest_common.sh@1604 -- # true 00:06:40.732 09:14:51 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:06:40.732 09:14:51 -- common/autotest_common.sh@1608 -- # killprocess 712337 00:06:40.732 09:14:51 -- common/autotest_common.sh@948 -- # '[' -z 712337 ']' 00:06:40.732 09:14:51 -- common/autotest_common.sh@952 -- # kill -0 712337 00:06:40.732 09:14:51 -- common/autotest_common.sh@953 -- # uname 00:06:40.732 09:14:51 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.732 09:14:51 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 712337 00:06:40.732 09:14:51 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.732 09:14:51 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.732 09:14:51 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 712337' 00:06:40.732 killing process with pid 712337 00:06:40.732 09:14:51 -- common/autotest_common.sh@967 -- # kill 712337 00:06:40.732 09:14:51 -- common/autotest_common.sh@972 -- # wait 712337 00:06:42.629 09:14:53 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:42.629 09:14:53 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:42.629 09:14:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:42.629 09:14:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:42.629 09:14:53 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:42.629 09:14:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:42.629 09:14:53 -- common/autotest_common.sh@10 -- # set +x 00:06:42.629 09:14:53 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:42.629 09:14:53 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:42.629 09:14:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.629 09:14:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.629 09:14:53 -- common/autotest_common.sh@10 -- # set +x 00:06:42.629 ************************************ 00:06:42.629 START TEST env 00:06:42.629 ************************************ 00:06:42.629 09:14:53 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:42.629 * Looking for test storage... 00:06:42.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:42.629 09:14:53 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:42.629 09:14:53 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.629 09:14:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.629 09:14:53 env -- common/autotest_common.sh@10 -- # set +x 00:06:42.629 ************************************ 00:06:42.629 START TEST env_memory 00:06:42.629 ************************************ 00:06:42.629 09:14:53 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:42.629 00:06:42.629 00:06:42.629 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.629 http://cunit.sourceforge.net/ 00:06:42.629 00:06:42.629 00:06:42.629 Suite: memory 00:06:42.629 Test: alloc and free memory map ...[2024-07-15 09:14:53.634187] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:42.629 passed 00:06:42.629 Test: mem map translation ...[2024-07-15 09:14:53.653923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:42.629 [2024-07-15 09:14:53.653945] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:42.629 [2024-07-15 09:14:53.653995] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:42.629 [2024-07-15 09:14:53.654007] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:42.629 passed 00:06:42.629 Test: mem map registration ...[2024-07-15 09:14:53.695065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:42.629 [2024-07-15 09:14:53.695086] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:42.629 passed 00:06:42.629 Test: mem map adjacent registrations ...passed 00:06:42.629 00:06:42.629 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.629 suites 1 1 n/a 0 0 00:06:42.629 tests 4 4 4 0 0 00:06:42.629 asserts 152 152 152 0 n/a 00:06:42.629 00:06:42.629 Elapsed time = 0.139 seconds 00:06:42.629 00:06:42.629 real 0m0.147s 00:06:42.629 user 0m0.139s 00:06:42.629 sys 0m0.007s 00:06:42.629 09:14:53 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.629 09:14:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:42.629 ************************************ 00:06:42.629 END TEST env_memory 00:06:42.629 ************************************ 00:06:42.629 09:14:53 env -- common/autotest_common.sh@1142 -- # return 0 00:06:42.629 09:14:53 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:42.629 09:14:53 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.629 09:14:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.629 09:14:53 env -- common/autotest_common.sh@10 -- # set +x 00:06:42.629 ************************************ 00:06:42.629 START TEST env_vtophys 00:06:42.629 ************************************ 00:06:42.629 09:14:53 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:42.629 EAL: lib.eal log level changed from notice to debug 00:06:42.629 EAL: Detected lcore 0 as core 0 on socket 0 00:06:42.629 EAL: Detected lcore 1 as core 1 on socket 0 00:06:42.629 EAL: Detected lcore 2 as core 2 on socket 0 00:06:42.629 EAL: Detected lcore 3 as core 3 on socket 0 00:06:42.629 EAL: Detected lcore 4 as core 4 on socket 0 00:06:42.629 EAL: Detected lcore 5 as core 5 on socket 0 00:06:42.629 EAL: Detected lcore 6 as core 8 on socket 0 00:06:42.629 EAL: Detected lcore 7 as core 9 on socket 0 00:06:42.629 EAL: Detected lcore 8 as core 10 on socket 0 00:06:42.629 EAL: Detected lcore 9 as core 11 on socket 0 00:06:42.629 EAL: Detected lcore 10 as core 12 on socket 0 00:06:42.629 EAL: Detected lcore 11 as core 13 on socket 0 00:06:42.629 EAL: Detected lcore 12 as core 0 on socket 1 00:06:42.629 EAL: Detected lcore 13 as core 1 on socket 1 00:06:42.629 EAL: Detected lcore 14 as core 2 on socket 1 00:06:42.629 EAL: Detected lcore 15 as core 3 on socket 1 00:06:42.629 EAL: Detected lcore 16 as core 4 on socket 1 00:06:42.629 EAL: Detected lcore 17 as core 5 on socket 1 00:06:42.629 EAL: Detected lcore 18 as core 8 on socket 1 00:06:42.629 EAL: Detected lcore 19 as core 9 on socket 1 00:06:42.629 EAL: Detected lcore 20 as core 10 on socket 1 00:06:42.629 EAL: Detected lcore 21 as core 11 on socket 1 00:06:42.629 EAL: Detected lcore 22 as core 12 on socket 1 00:06:42.629 EAL: Detected lcore 23 as core 13 on socket 1 00:06:42.629 EAL: Detected lcore 24 as core 0 on socket 0 00:06:42.629 EAL: Detected lcore 25 as core 1 on socket 0 00:06:42.629 EAL: Detected lcore 26 as core 2 on socket 0 00:06:42.629 EAL: Detected lcore 27 as core 3 on socket 0 00:06:42.629 EAL: Detected lcore 28 as core 4 on socket 0 00:06:42.629 EAL: Detected lcore 29 as core 5 on socket 0 00:06:42.629 EAL: Detected lcore 30 as core 8 on socket 0 00:06:42.629 EAL: Detected lcore 31 as core 9 on socket 0 00:06:42.629 EAL: Detected lcore 32 as core 10 on socket 0 00:06:42.629 EAL: Detected lcore 33 as core 11 on socket 0 00:06:42.629 EAL: Detected lcore 34 as core 12 on socket 0 00:06:42.629 EAL: Detected lcore 35 as core 13 on socket 0 00:06:42.629 EAL: Detected lcore 36 as core 0 on socket 1 00:06:42.629 EAL: Detected lcore 37 as core 1 on socket 1 00:06:42.629 EAL: Detected lcore 38 as core 2 on socket 1 00:06:42.629 EAL: Detected lcore 39 as core 3 on socket 1 00:06:42.629 EAL: Detected lcore 40 as core 4 on socket 1 00:06:42.629 EAL: Detected lcore 41 as core 5 on socket 1 00:06:42.629 EAL: Detected 
lcore 42 as core 8 on socket 1 00:06:42.629 EAL: Detected lcore 43 as core 9 on socket 1 00:06:42.629 EAL: Detected lcore 44 as core 10 on socket 1 00:06:42.629 EAL: Detected lcore 45 as core 11 on socket 1 00:06:42.629 EAL: Detected lcore 46 as core 12 on socket 1 00:06:42.629 EAL: Detected lcore 47 as core 13 on socket 1 00:06:42.629 EAL: Maximum logical cores by configuration: 128 00:06:42.629 EAL: Detected CPU lcores: 48 00:06:42.629 EAL: Detected NUMA nodes: 2 00:06:42.629 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:42.629 EAL: Detected shared linkage of DPDK 00:06:42.629 EAL: No shared files mode enabled, IPC will be disabled 00:06:42.887 EAL: Bus pci wants IOVA as 'DC' 00:06:42.887 EAL: Buses did not request a specific IOVA mode. 00:06:42.887 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:42.887 EAL: Selected IOVA mode 'VA' 00:06:42.887 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.887 EAL: Probing VFIO support... 00:06:42.887 EAL: IOMMU type 1 (Type 1) is supported 00:06:42.887 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:42.887 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:42.887 EAL: VFIO support initialized 00:06:42.887 EAL: Ask a virtual area of 0x2e000 bytes 00:06:42.887 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:42.887 EAL: Setting up physically contiguous memory... 00:06:42.887 EAL: Setting maximum number of open files to 524288 00:06:42.887 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:42.887 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:42.887 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:42.887 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.887 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:42.887 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:42.887 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.887 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:42.887 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:42.887 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.887 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:42.887 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:42.887 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.887 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:42.887 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:42.887 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.887 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:42.887 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:42.887 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.887 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:42.887 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:42.887 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.887 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:42.887 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:42.887 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.887 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:42.887 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:42.888 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:42.888 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.888 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:42.888 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:06:42.888 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.888 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:42.888 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:42.888 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.888 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:42.888 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:42.888 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.888 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:42.888 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:42.888 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.888 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:42.888 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:42.888 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.888 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:42.888 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:42.888 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.888 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:42.888 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:42.888 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.888 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:42.888 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:42.888 EAL: Hugepages will be freed exactly as allocated. 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: TSC frequency is ~2700000 KHz 00:06:42.888 EAL: Main lcore 0 is ready (tid=7fe591783a00;cpuset=[0]) 00:06:42.888 EAL: Trying to obtain current memory policy. 00:06:42.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.888 EAL: Restoring previous memory policy: 0 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was expanded by 2MB 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:42.888 EAL: Mem event callback 'spdk:(nil)' registered 00:06:42.888 00:06:42.888 00:06:42.888 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.888 http://cunit.sourceforge.net/ 00:06:42.888 00:06:42.888 00:06:42.888 Suite: components_suite 00:06:42.888 Test: vtophys_malloc_test ...passed 00:06:42.888 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:42.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.888 EAL: Restoring previous memory policy: 4 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was expanded by 4MB 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was shrunk by 4MB 00:06:42.888 EAL: Trying to obtain current memory policy. 
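A quick consistency check on the memseg geometry reserved during the EAL bring-up above, assuming each list spans its full n_segs pages: 8192 segments of 2 MiB hugepages come to exactly the 0x400000000 (16 GiB) virtual area requested per list, with the separate 0x61000-byte area holding the list bookkeeping.

    # sketch: 8192 segments x 2 MiB should equal the 0x400000000 VA window
    echo $(( 8192 * 2097152 ))             # 17179869184
    printf '0x%x\n' $(( 8192 * 2097152 ))  # 0x400000000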
00:06:42.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.888 EAL: Restoring previous memory policy: 4 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was expanded by 6MB 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was shrunk by 6MB 00:06:42.888 EAL: Trying to obtain current memory policy. 00:06:42.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.888 EAL: Restoring previous memory policy: 4 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was expanded by 10MB 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was shrunk by 10MB 00:06:42.888 EAL: Trying to obtain current memory policy. 00:06:42.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.888 EAL: Restoring previous memory policy: 4 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was expanded by 18MB 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was shrunk by 18MB 00:06:42.888 EAL: Trying to obtain current memory policy. 00:06:42.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.888 EAL: Restoring previous memory policy: 4 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was expanded by 34MB 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was shrunk by 34MB 00:06:42.888 EAL: Trying to obtain current memory policy. 00:06:42.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.888 EAL: Restoring previous memory policy: 4 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was expanded by 66MB 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was shrunk by 66MB 00:06:42.888 EAL: Trying to obtain current memory policy. 
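Every "expanded by N" above is paired with a "shrunk by N" once the test frees its buffer, so the allocator is handing hugepages back on free rather than caching them. A rough way to confirm the pairing over a saved copy of this console output (the build.log name is just an assumption):

    # sketch: expand/shrink events should tally 1:1 per size
    grep -o 'expanded by [0-9]*MB' build.log | sort | uniq -c
    grep -o 'shrunk by [0-9]*MB'  build.log | sort | uniq -c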
00:06:42.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.888 EAL: Restoring previous memory policy: 4 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was expanded by 130MB 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was shrunk by 130MB 00:06:42.888 EAL: Trying to obtain current memory policy. 00:06:42.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.888 EAL: Restoring previous memory policy: 4 00:06:42.888 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.888 EAL: request: mp_malloc_sync 00:06:42.888 EAL: No shared files mode enabled, IPC is disabled 00:06:42.888 EAL: Heap on socket 0 was expanded by 258MB 00:06:43.145 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.145 EAL: request: mp_malloc_sync 00:06:43.145 EAL: No shared files mode enabled, IPC is disabled 00:06:43.145 EAL: Heap on socket 0 was shrunk by 258MB 00:06:43.145 EAL: Trying to obtain current memory policy. 00:06:43.145 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.145 EAL: Restoring previous memory policy: 4 00:06:43.145 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.145 EAL: request: mp_malloc_sync 00:06:43.145 EAL: No shared files mode enabled, IPC is disabled 00:06:43.145 EAL: Heap on socket 0 was expanded by 514MB 00:06:43.402 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.402 EAL: request: mp_malloc_sync 00:06:43.402 EAL: No shared files mode enabled, IPC is disabled 00:06:43.402 EAL: Heap on socket 0 was shrunk by 514MB 00:06:43.402 EAL: Trying to obtain current memory policy. 
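The expansion sizes in this suite walk a doubling ladder: 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB so far, each of the form 2^k + 2 MB, with the 1026 MB round still to come below. A one-liner reproducing the sequence:

    # sketch: the expand ladder seen in vtophys_spdk_malloc_test
    for k in $(seq 1 10); do printf '%sMB ' $(( (1 << k) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB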
00:06:43.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.660 EAL: Restoring previous memory policy: 4 00:06:43.660 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.660 EAL: request: mp_malloc_sync 00:06:43.660 EAL: No shared files mode enabled, IPC is disabled 00:06:43.660 EAL: Heap on socket 0 was expanded by 1026MB 00:06:43.918 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.176 EAL: request: mp_malloc_sync 00:06:44.176 EAL: No shared files mode enabled, IPC is disabled 00:06:44.176 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:44.176 passed 00:06:44.176 00:06:44.176 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.176 suites 1 1 n/a 0 0 00:06:44.176 tests 2 2 2 0 0 00:06:44.176 asserts 497 497 497 0 n/a 00:06:44.176 00:06:44.176 Elapsed time = 1.290 seconds 00:06:44.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.176 EAL: request: mp_malloc_sync 00:06:44.176 EAL: No shared files mode enabled, IPC is disabled 00:06:44.176 EAL: Heap on socket 0 was shrunk by 2MB 00:06:44.176 EAL: No shared files mode enabled, IPC is disabled 00:06:44.176 EAL: No shared files mode enabled, IPC is disabled 00:06:44.176 EAL: No shared files mode enabled, IPC is disabled 00:06:44.176 00:06:44.176 real 0m1.400s 00:06:44.176 user 0m0.822s 00:06:44.176 sys 0m0.544s 00:06:44.176 09:14:55 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.176 09:14:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:44.176 ************************************ 00:06:44.176 END TEST env_vtophys 00:06:44.176 ************************************ 00:06:44.176 09:14:55 env -- common/autotest_common.sh@1142 -- # return 0 00:06:44.176 09:14:55 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:44.176 09:14:55 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.176 09:14:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.176 09:14:55 env -- common/autotest_common.sh@10 -- # set +x 00:06:44.176 ************************************ 00:06:44.176 START TEST env_pci 00:06:44.176 ************************************ 00:06:44.176 09:14:55 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:44.176 00:06:44.176 00:06:44.176 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.176 http://cunit.sourceforge.net/ 00:06:44.176 00:06:44.176 00:06:44.176 Suite: pci 00:06:44.176 Test: pci_hook ...[2024-07-15 09:14:55.254337] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 713223 has claimed it 00:06:44.176 EAL: Cannot find device (10000:00:01.0) 00:06:44.176 EAL: Failed to attach device on primary process 00:06:44.176 passed 00:06:44.176 00:06:44.176 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.176 suites 1 1 n/a 0 0 00:06:44.176 tests 1 1 1 0 0 00:06:44.176 asserts 25 25 25 0 n/a 00:06:44.176 00:06:44.176 Elapsed time = 0.022 seconds 00:06:44.176 00:06:44.176 real 0m0.035s 00:06:44.176 user 0m0.012s 00:06:44.176 sys 0m0.022s 00:06:44.176 09:14:55 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.176 09:14:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:44.176 ************************************ 00:06:44.176 END TEST env_pci 00:06:44.176 ************************************ 
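The pci_hook case above passes by failing on purpose: the suite arranges for the nonexistent BDF 10000:00:01.0 to be already claimed (by pid 713223 here) and asserts that the probe is then refused. Those claims are plain lock files under /var/tmp, so a stale one left by a crashed run can block a real device the same way; a quick check, assuming the path format shown in the error:

    # sketch: look for leftover SPDK PCI claim locks
    ls -l /var/tmp/spdk_pci_lock_* 2>/dev/null || echo 'no stale claims'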
00:06:44.176 09:14:55 env -- common/autotest_common.sh@1142 -- # return 0 00:06:44.176 09:14:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:44.176 09:14:55 env -- env/env.sh@15 -- # uname 00:06:44.176 09:14:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:44.176 09:14:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:44.176 09:14:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:44.176 09:14:55 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:44.176 09:14:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.176 09:14:55 env -- common/autotest_common.sh@10 -- # set +x 00:06:44.176 ************************************ 00:06:44.176 START TEST env_dpdk_post_init 00:06:44.176 ************************************ 00:06:44.176 09:14:55 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:44.176 EAL: Detected CPU lcores: 48 00:06:44.176 EAL: Detected NUMA nodes: 2 00:06:44.177 EAL: Detected shared linkage of DPDK 00:06:44.177 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:44.437 EAL: Selected IOVA mode 'VA' 00:06:44.437 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.437 EAL: VFIO support initialized 00:06:44.437 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:44.437 EAL: Using IOMMU type 1 (Type 1) 00:06:44.437 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:44.437 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:44.437 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:44.437 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:44.437 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:44.437 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:44.437 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:44.437 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:45.378 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:06:45.378 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:45.378 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:45.378 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:45.378 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:45.378 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:45.378 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:45.378 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:45.378 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:48.701 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:06:48.701 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:06:48.701 Starting DPDK initialization... 00:06:48.701 Starting SPDK post initialization... 00:06:48.701 SPDK NVMe probe 00:06:48.701 Attaching to 0000:0b:00.0 00:06:48.701 Attached to 0000:0b:00.0 00:06:48.701 Cleaning up... 
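Probing above walks BDF order: the eight socket-0 I/OAT channels, the NVMe under test, then the eight socket-1 channels, and cleanup releases the NVMe BAR mapping at 0x202001020000. The harness elsewhere narrows setup.sh to specific devices with the PCI_ALLOWED allow-list, as the dm_mount verify above did; the pattern, with paths relative to the SPDK repo root:

    # sketch: constrain setup.sh to the one NVMe under test, then inspect
    PCI_ALLOWED='0000:0b:00.0' ./scripts/setup.sh config
    ./scripts/setup.sh status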
00:06:48.701 00:06:48.701 real 0m4.322s 00:06:48.701 user 0m3.199s 00:06:48.701 sys 0m0.183s 00:06:48.701 09:14:59 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.701 09:14:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:48.701 ************************************ 00:06:48.701 END TEST env_dpdk_post_init 00:06:48.701 ************************************ 00:06:48.701 09:14:59 env -- common/autotest_common.sh@1142 -- # return 0 00:06:48.701 09:14:59 env -- env/env.sh@26 -- # uname 00:06:48.701 09:14:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:48.701 09:14:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:48.701 09:14:59 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.701 09:14:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.701 09:14:59 env -- common/autotest_common.sh@10 -- # set +x 00:06:48.701 ************************************ 00:06:48.701 START TEST env_mem_callbacks 00:06:48.701 ************************************ 00:06:48.701 09:14:59 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:48.701 EAL: Detected CPU lcores: 48 00:06:48.701 EAL: Detected NUMA nodes: 2 00:06:48.701 EAL: Detected shared linkage of DPDK 00:06:48.701 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:48.701 EAL: Selected IOVA mode 'VA' 00:06:48.701 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.701 EAL: VFIO support initialized 00:06:48.701 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:48.701 00:06:48.701 00:06:48.701 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.701 http://cunit.sourceforge.net/ 00:06:48.701 00:06:48.701 00:06:48.701 Suite: memory 00:06:48.701 Test: test ... 
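The trace below interleaves the allocations the test performs (malloc/free lines) with the register/unregister notifications its memory callback receives, and each buf line records where the allocation landed; the PASSED marks are per-check. The addresses sit inside the memseg windows reserved earlier, e.g. the 64-byte buf ends up 0xc0 bytes below the 3 MiB buf:

    # sketch: gap between the two bufs reported below
    echo $(( 0x200000500000 - 0x2000004fff40 ))   # 192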
00:06:48.701 register 0x200000200000 2097152 00:06:48.701 malloc 3145728 00:06:48.701 register 0x200000400000 4194304 00:06:48.701 buf 0x200000500000 len 3145728 PASSED 00:06:48.701 malloc 64 00:06:48.701 buf 0x2000004fff40 len 64 PASSED 00:06:48.701 malloc 4194304 00:06:48.701 register 0x200000800000 6291456 00:06:48.701 buf 0x200000a00000 len 4194304 PASSED 00:06:48.701 free 0x200000500000 3145728 00:06:48.701 free 0x2000004fff40 64 00:06:48.701 unregister 0x200000400000 4194304 PASSED 00:06:48.701 free 0x200000a00000 4194304 00:06:48.701 unregister 0x200000800000 6291456 PASSED 00:06:48.701 malloc 8388608 00:06:48.701 register 0x200000400000 10485760 00:06:48.701 buf 0x200000600000 len 8388608 PASSED 00:06:48.701 free 0x200000600000 8388608 00:06:48.701 unregister 0x200000400000 10485760 PASSED 00:06:48.701 passed 00:06:48.701 00:06:48.701 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.701 suites 1 1 n/a 0 0 00:06:48.701 tests 1 1 1 0 0 00:06:48.701 asserts 15 15 15 0 n/a 00:06:48.701 00:06:48.701 Elapsed time = 0.005 seconds 00:06:48.701 00:06:48.701 real 0m0.048s 00:06:48.701 user 0m0.014s 00:06:48.701 sys 0m0.034s 00:06:48.701 09:14:59 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.701 09:14:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:48.701 ************************************ 00:06:48.701 END TEST env_mem_callbacks 00:06:48.701 ************************************ 00:06:48.701 09:14:59 env -- common/autotest_common.sh@1142 -- # return 0 00:06:48.701 00:06:48.701 real 0m6.249s 00:06:48.701 user 0m4.309s 00:06:48.701 sys 0m0.983s 00:06:48.701 09:14:59 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.701 09:14:59 env -- common/autotest_common.sh@10 -- # set +x 00:06:48.701 ************************************ 00:06:48.701 END TEST env 00:06:48.701 ************************************ 00:06:48.701 09:14:59 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.701 09:14:59 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:48.701 09:14:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.701 09:14:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.701 09:14:59 -- common/autotest_common.sh@10 -- # set +x 00:06:48.701 ************************************ 00:06:48.701 START TEST rpc 00:06:48.701 ************************************ 00:06:48.701 09:14:59 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:48.701 * Looking for test storage... 00:06:48.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:48.701 09:14:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=713882 00:06:48.701 09:14:59 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:48.701 09:14:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.701 09:14:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 713882 00:06:48.701 09:14:59 rpc -- common/autotest_common.sh@829 -- # '[' -z 713882 ']' 00:06:48.701 09:14:59 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.701 09:14:59 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.701 09:14:59 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:48.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.701 09:14:59 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.701 09:14:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.958 [2024-07-15 09:14:59.923113] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:06:48.959 [2024-07-15 09:14:59.923220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid713882 ] 00:06:48.959 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.959 [2024-07-15 09:14:59.981008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.959 [2024-07-15 09:15:00.096219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:48.959 [2024-07-15 09:15:00.096277] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 713882' to capture a snapshot of events at runtime. 00:06:48.959 [2024-07-15 09:15:00.096307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.959 [2024-07-15 09:15:00.096318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.959 [2024-07-15 09:15:00.096328] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid713882 for offline analysis/debug. 00:06:48.959 [2024-07-15 09:15:00.096373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.217 09:15:00 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.217 09:15:00 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:49.217 09:15:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:49.217 09:15:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:49.217 09:15:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:49.217 09:15:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:49.217 09:15:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.217 09:15:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.217 09:15:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.217 ************************************ 00:06:49.217 START TEST rpc_integrity 00:06:49.217 ************************************ 00:06:49.217 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:49.217 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:49.217 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.217 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.217 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.217 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:06:49.217 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:49.217 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:49.217 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:49.217 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.217 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.475 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.475 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:49.475 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:49.475 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.475 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.475 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.475 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:49.475 { 00:06:49.475 "name": "Malloc0", 00:06:49.475 "aliases": [ 00:06:49.475 "97d2448b-ff83-4f8d-80d9-33475bc0daae" 00:06:49.475 ], 00:06:49.475 "product_name": "Malloc disk", 00:06:49.475 "block_size": 512, 00:06:49.475 "num_blocks": 16384, 00:06:49.475 "uuid": "97d2448b-ff83-4f8d-80d9-33475bc0daae", 00:06:49.475 "assigned_rate_limits": { 00:06:49.475 "rw_ios_per_sec": 0, 00:06:49.475 "rw_mbytes_per_sec": 0, 00:06:49.475 "r_mbytes_per_sec": 0, 00:06:49.475 "w_mbytes_per_sec": 0 00:06:49.475 }, 00:06:49.475 "claimed": false, 00:06:49.475 "zoned": false, 00:06:49.475 "supported_io_types": { 00:06:49.475 "read": true, 00:06:49.475 "write": true, 00:06:49.475 "unmap": true, 00:06:49.475 "flush": true, 00:06:49.475 "reset": true, 00:06:49.475 "nvme_admin": false, 00:06:49.475 "nvme_io": false, 00:06:49.475 "nvme_io_md": false, 00:06:49.475 "write_zeroes": true, 00:06:49.475 "zcopy": true, 00:06:49.475 "get_zone_info": false, 00:06:49.475 "zone_management": false, 00:06:49.475 "zone_append": false, 00:06:49.475 "compare": false, 00:06:49.475 "compare_and_write": false, 00:06:49.475 "abort": true, 00:06:49.475 "seek_hole": false, 00:06:49.475 "seek_data": false, 00:06:49.475 "copy": true, 00:06:49.475 "nvme_iov_md": false 00:06:49.475 }, 00:06:49.475 "memory_domains": [ 00:06:49.475 { 00:06:49.476 "dma_device_id": "system", 00:06:49.476 "dma_device_type": 1 00:06:49.476 }, 00:06:49.476 { 00:06:49.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.476 "dma_device_type": 2 00:06:49.476 } 00:06:49.476 ], 00:06:49.476 "driver_specific": {} 00:06:49.476 } 00:06:49.476 ]' 00:06:49.476 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:49.476 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:49.476 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.476 [2024-07-15 09:15:00.463225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:49.476 [2024-07-15 09:15:00.463265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:49.476 [2024-07-15 09:15:00.463285] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa2ed50 00:06:49.476 [2024-07-15 09:15:00.463298] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:49.476 
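The vbdev_passthru notices just above come from rpc_integrity layering Passthru0 on top of Malloc0; the JSON dump of both bdevs follows. A minimal sketch of the same sequence driven by hand with scripts/rpc.py, assuming a spdk_tgt is already listening on the default /var/tmp/spdk.sock and that it returns the same bdev names seen in this log:

./scripts/rpc.py bdev_malloc_create 8 512              # 8 MB at 512 B blocks -> 16384 blocks, named Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length            # 2, matching the test's '[' 2 == 2 ']' check below
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length            # back to 0

Note Passthru0 shows "claimed": true on Malloc0 in the dump that follows, which is exactly the "bdev claimed" notice printed above.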
[2024-07-15 09:15:00.464593] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:49.476 [2024-07-15 09:15:00.464615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:49.476 Passthru0 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.476 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.476 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:49.476 { 00:06:49.476 "name": "Malloc0", 00:06:49.476 "aliases": [ 00:06:49.476 "97d2448b-ff83-4f8d-80d9-33475bc0daae" 00:06:49.476 ], 00:06:49.476 "product_name": "Malloc disk", 00:06:49.476 "block_size": 512, 00:06:49.476 "num_blocks": 16384, 00:06:49.476 "uuid": "97d2448b-ff83-4f8d-80d9-33475bc0daae", 00:06:49.476 "assigned_rate_limits": { 00:06:49.476 "rw_ios_per_sec": 0, 00:06:49.476 "rw_mbytes_per_sec": 0, 00:06:49.476 "r_mbytes_per_sec": 0, 00:06:49.476 "w_mbytes_per_sec": 0 00:06:49.476 }, 00:06:49.476 "claimed": true, 00:06:49.476 "claim_type": "exclusive_write", 00:06:49.476 "zoned": false, 00:06:49.476 "supported_io_types": { 00:06:49.476 "read": true, 00:06:49.476 "write": true, 00:06:49.476 "unmap": true, 00:06:49.476 "flush": true, 00:06:49.476 "reset": true, 00:06:49.476 "nvme_admin": false, 00:06:49.476 "nvme_io": false, 00:06:49.476 "nvme_io_md": false, 00:06:49.476 "write_zeroes": true, 00:06:49.476 "zcopy": true, 00:06:49.476 "get_zone_info": false, 00:06:49.476 "zone_management": false, 00:06:49.476 "zone_append": false, 00:06:49.476 "compare": false, 00:06:49.476 "compare_and_write": false, 00:06:49.476 "abort": true, 00:06:49.476 "seek_hole": false, 00:06:49.476 "seek_data": false, 00:06:49.476 "copy": true, 00:06:49.476 "nvme_iov_md": false 00:06:49.476 }, 00:06:49.476 "memory_domains": [ 00:06:49.476 { 00:06:49.476 "dma_device_id": "system", 00:06:49.476 "dma_device_type": 1 00:06:49.476 }, 00:06:49.476 { 00:06:49.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.476 "dma_device_type": 2 00:06:49.476 } 00:06:49.476 ], 00:06:49.476 "driver_specific": {} 00:06:49.476 }, 00:06:49.476 { 00:06:49.476 "name": "Passthru0", 00:06:49.476 "aliases": [ 00:06:49.476 "46b79e07-723d-544b-b683-4d73beafe8fa" 00:06:49.476 ], 00:06:49.476 "product_name": "passthru", 00:06:49.476 "block_size": 512, 00:06:49.476 "num_blocks": 16384, 00:06:49.476 "uuid": "46b79e07-723d-544b-b683-4d73beafe8fa", 00:06:49.476 "assigned_rate_limits": { 00:06:49.476 "rw_ios_per_sec": 0, 00:06:49.476 "rw_mbytes_per_sec": 0, 00:06:49.476 "r_mbytes_per_sec": 0, 00:06:49.476 "w_mbytes_per_sec": 0 00:06:49.476 }, 00:06:49.476 "claimed": false, 00:06:49.476 "zoned": false, 00:06:49.476 "supported_io_types": { 00:06:49.476 "read": true, 00:06:49.476 "write": true, 00:06:49.476 "unmap": true, 00:06:49.476 "flush": true, 00:06:49.476 "reset": true, 00:06:49.476 "nvme_admin": false, 00:06:49.476 "nvme_io": false, 00:06:49.476 "nvme_io_md": false, 00:06:49.476 "write_zeroes": true, 00:06:49.476 "zcopy": true, 00:06:49.476 "get_zone_info": false, 00:06:49.476 "zone_management": false, 00:06:49.476 "zone_append": false, 00:06:49.476 "compare": false, 00:06:49.476 "compare_and_write": false, 00:06:49.476 "abort": true, 00:06:49.476 "seek_hole": false, 
00:06:49.476 "seek_data": false, 00:06:49.476 "copy": true, 00:06:49.476 "nvme_iov_md": false 00:06:49.476 }, 00:06:49.476 "memory_domains": [ 00:06:49.476 { 00:06:49.476 "dma_device_id": "system", 00:06:49.476 "dma_device_type": 1 00:06:49.476 }, 00:06:49.476 { 00:06:49.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.476 "dma_device_type": 2 00:06:49.476 } 00:06:49.476 ], 00:06:49.476 "driver_specific": { 00:06:49.476 "passthru": { 00:06:49.476 "name": "Passthru0", 00:06:49.476 "base_bdev_name": "Malloc0" 00:06:49.476 } 00:06:49.476 } 00:06:49.476 } 00:06:49.476 ]' 00:06:49.476 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:49.476 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:49.476 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.476 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.476 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.476 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:49.476 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:49.476 09:15:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:49.476 00:06:49.476 real 0m0.213s 00:06:49.476 user 0m0.131s 00:06:49.476 sys 0m0.026s 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.476 09:15:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.476 ************************************ 00:06:49.476 END TEST rpc_integrity 00:06:49.476 ************************************ 00:06:49.476 09:15:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:49.476 09:15:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:49.476 09:15:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.476 09:15:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.476 09:15:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.476 ************************************ 00:06:49.476 START TEST rpc_plugins 00:06:49.476 ************************************ 00:06:49.476 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:49.476 09:15:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:49.476 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.476 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:49.476 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.476 09:15:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:49.476 09:15:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:06:49.476 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.476 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:49.476 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.476 09:15:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:49.476 { 00:06:49.476 "name": "Malloc1", 00:06:49.476 "aliases": [ 00:06:49.476 "3daff00a-21d3-4011-9107-55a0b05ec1a8" 00:06:49.476 ], 00:06:49.476 "product_name": "Malloc disk", 00:06:49.476 "block_size": 4096, 00:06:49.476 "num_blocks": 256, 00:06:49.476 "uuid": "3daff00a-21d3-4011-9107-55a0b05ec1a8", 00:06:49.476 "assigned_rate_limits": { 00:06:49.476 "rw_ios_per_sec": 0, 00:06:49.476 "rw_mbytes_per_sec": 0, 00:06:49.476 "r_mbytes_per_sec": 0, 00:06:49.476 "w_mbytes_per_sec": 0 00:06:49.476 }, 00:06:49.476 "claimed": false, 00:06:49.476 "zoned": false, 00:06:49.476 "supported_io_types": { 00:06:49.476 "read": true, 00:06:49.476 "write": true, 00:06:49.476 "unmap": true, 00:06:49.476 "flush": true, 00:06:49.476 "reset": true, 00:06:49.476 "nvme_admin": false, 00:06:49.476 "nvme_io": false, 00:06:49.476 "nvme_io_md": false, 00:06:49.476 "write_zeroes": true, 00:06:49.476 "zcopy": true, 00:06:49.476 "get_zone_info": false, 00:06:49.476 "zone_management": false, 00:06:49.476 "zone_append": false, 00:06:49.476 "compare": false, 00:06:49.476 "compare_and_write": false, 00:06:49.476 "abort": true, 00:06:49.476 "seek_hole": false, 00:06:49.476 "seek_data": false, 00:06:49.476 "copy": true, 00:06:49.476 "nvme_iov_md": false 00:06:49.476 }, 00:06:49.476 "memory_domains": [ 00:06:49.476 { 00:06:49.476 "dma_device_id": "system", 00:06:49.476 "dma_device_type": 1 00:06:49.476 }, 00:06:49.476 { 00:06:49.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.476 "dma_device_type": 2 00:06:49.476 } 00:06:49.476 ], 00:06:49.476 "driver_specific": {} 00:06:49.476 } 00:06:49.476 ]' 00:06:49.476 09:15:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:49.734 09:15:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:49.734 09:15:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:49.734 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.734 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:49.734 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.734 09:15:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:49.734 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.734 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:49.734 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.734 09:15:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:49.734 09:15:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:49.734 09:15:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:49.734 00:06:49.734 real 0m0.102s 00:06:49.734 user 0m0.068s 00:06:49.734 sys 0m0.008s 00:06:49.734 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.734 09:15:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:49.734 ************************************ 00:06:49.734 END TEST rpc_plugins 00:06:49.734 ************************************ 00:06:49.734 09:15:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:49.734 09:15:00 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:49.734 09:15:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.734 09:15:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.735 09:15:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.735 ************************************ 00:06:49.735 START TEST rpc_trace_cmd_test 00:06:49.735 ************************************ 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:49.735 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid713882", 00:06:49.735 "tpoint_group_mask": "0x8", 00:06:49.735 "iscsi_conn": { 00:06:49.735 "mask": "0x2", 00:06:49.735 "tpoint_mask": "0x0" 00:06:49.735 }, 00:06:49.735 "scsi": { 00:06:49.735 "mask": "0x4", 00:06:49.735 "tpoint_mask": "0x0" 00:06:49.735 }, 00:06:49.735 "bdev": { 00:06:49.735 "mask": "0x8", 00:06:49.735 "tpoint_mask": "0xffffffffffffffff" 00:06:49.735 }, 00:06:49.735 "nvmf_rdma": { 00:06:49.735 "mask": "0x10", 00:06:49.735 "tpoint_mask": "0x0" 00:06:49.735 }, 00:06:49.735 "nvmf_tcp": { 00:06:49.735 "mask": "0x20", 00:06:49.735 "tpoint_mask": "0x0" 00:06:49.735 }, 00:06:49.735 "ftl": { 00:06:49.735 "mask": "0x40", 00:06:49.735 "tpoint_mask": "0x0" 00:06:49.735 }, 00:06:49.735 "blobfs": { 00:06:49.735 "mask": "0x80", 00:06:49.735 "tpoint_mask": "0x0" 00:06:49.735 }, 00:06:49.735 "dsa": { 00:06:49.735 "mask": "0x200", 00:06:49.735 "tpoint_mask": "0x0" 00:06:49.735 }, 00:06:49.735 "thread": { 00:06:49.735 "mask": "0x400", 00:06:49.735 "tpoint_mask": "0x0" 00:06:49.735 }, 00:06:49.735 "nvme_pcie": { 00:06:49.735 "mask": "0x800", 00:06:49.735 "tpoint_mask": "0x0" 00:06:49.735 }, 00:06:49.735 "iaa": { 00:06:49.735 "mask": "0x1000", 00:06:49.735 "tpoint_mask": "0x0" 00:06:49.735 }, 00:06:49.735 "nvme_tcp": { 00:06:49.735 "mask": "0x2000", 00:06:49.735 "tpoint_mask": "0x0" 00:06:49.735 }, 00:06:49.735 "bdev_nvme": { 00:06:49.735 "mask": "0x4000", 00:06:49.735 "tpoint_mask": "0x0" 00:06:49.735 }, 00:06:49.735 "sock": { 00:06:49.735 "mask": "0x8000", 00:06:49.735 "tpoint_mask": "0x0" 00:06:49.735 } 00:06:49.735 }' 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:49.735 09:15:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:49.993 09:15:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
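These mask checks pass because this spdk_tgt was launched with '-e bdev', so the bdev tpoint group (mask 0x8) is fully enabled and a per-pid trace shm file exists. A rough sketch of inspecting the same state by hand, assuming the running target from this log (pid 713882) and jq on PATH; the spdk_trace invocation is the one the target's own startup notice suggested:

./scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask   # "0x8" when started with -e bdev
./scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path     # /dev/shm/spdk_tgt_trace.pid713882
spdk_trace -s spdk_tgt -p 713882                             # capture a snapshot of events at runtime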
00:06:49.993 00:06:49.993 real 0m0.178s 00:06:49.993 user 0m0.156s 00:06:49.993 sys 0m0.016s 00:06:49.993 09:15:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.993 09:15:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.993 ************************************ 00:06:49.993 END TEST rpc_trace_cmd_test 00:06:49.993 ************************************ 00:06:49.993 09:15:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:49.993 09:15:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:49.993 09:15:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:49.993 09:15:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:49.993 09:15:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.993 09:15:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.993 09:15:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.993 ************************************ 00:06:49.993 START TEST rpc_daemon_integrity 00:06:49.993 ************************************ 00:06:49.993 09:15:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:49.993 09:15:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:49.993 09:15:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.993 09:15:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.993 09:15:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.993 09:15:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:49.993 09:15:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:49.993 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:49.993 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:49.993 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.993 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.993 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.993 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:49.993 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:49.993 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.993 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.993 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.993 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:49.993 { 00:06:49.993 "name": "Malloc2", 00:06:49.993 "aliases": [ 00:06:49.993 "d5a825e6-4613-475a-9c20-2d2c0bdb4c71" 00:06:49.993 ], 00:06:49.993 "product_name": "Malloc disk", 00:06:49.993 "block_size": 512, 00:06:49.993 "num_blocks": 16384, 00:06:49.993 "uuid": "d5a825e6-4613-475a-9c20-2d2c0bdb4c71", 00:06:49.993 "assigned_rate_limits": { 00:06:49.993 "rw_ios_per_sec": 0, 00:06:49.993 "rw_mbytes_per_sec": 0, 00:06:49.993 "r_mbytes_per_sec": 0, 00:06:49.993 "w_mbytes_per_sec": 0 00:06:49.993 }, 00:06:49.993 "claimed": false, 00:06:49.993 "zoned": false, 00:06:49.993 "supported_io_types": { 00:06:49.993 "read": true, 00:06:49.993 "write": true, 00:06:49.993 "unmap": true, 00:06:49.993 "flush": true, 00:06:49.993 "reset": true, 00:06:49.993 "nvme_admin": false, 00:06:49.993 "nvme_io": false, 
00:06:49.993 "nvme_io_md": false, 00:06:49.993 "write_zeroes": true, 00:06:49.993 "zcopy": true, 00:06:49.993 "get_zone_info": false, 00:06:49.993 "zone_management": false, 00:06:49.993 "zone_append": false, 00:06:49.993 "compare": false, 00:06:49.993 "compare_and_write": false, 00:06:49.994 "abort": true, 00:06:49.994 "seek_hole": false, 00:06:49.994 "seek_data": false, 00:06:49.994 "copy": true, 00:06:49.994 "nvme_iov_md": false 00:06:49.994 }, 00:06:49.994 "memory_domains": [ 00:06:49.994 { 00:06:49.994 "dma_device_id": "system", 00:06:49.994 "dma_device_type": 1 00:06:49.994 }, 00:06:49.994 { 00:06:49.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.994 "dma_device_type": 2 00:06:49.994 } 00:06:49.994 ], 00:06:49.994 "driver_specific": {} 00:06:49.994 } 00:06:49.994 ]' 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.994 [2024-07-15 09:15:01.088992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:49.994 [2024-07-15 09:15:01.089035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:49.994 [2024-07-15 09:15:01.089058] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa2fc00 00:06:49.994 [2024-07-15 09:15:01.089071] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:49.994 [2024-07-15 09:15:01.090232] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:49.994 [2024-07-15 09:15:01.090255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:49.994 Passthru0 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:49.994 { 00:06:49.994 "name": "Malloc2", 00:06:49.994 "aliases": [ 00:06:49.994 "d5a825e6-4613-475a-9c20-2d2c0bdb4c71" 00:06:49.994 ], 00:06:49.994 "product_name": "Malloc disk", 00:06:49.994 "block_size": 512, 00:06:49.994 "num_blocks": 16384, 00:06:49.994 "uuid": "d5a825e6-4613-475a-9c20-2d2c0bdb4c71", 00:06:49.994 "assigned_rate_limits": { 00:06:49.994 "rw_ios_per_sec": 0, 00:06:49.994 "rw_mbytes_per_sec": 0, 00:06:49.994 "r_mbytes_per_sec": 0, 00:06:49.994 "w_mbytes_per_sec": 0 00:06:49.994 }, 00:06:49.994 "claimed": true, 00:06:49.994 "claim_type": "exclusive_write", 00:06:49.994 "zoned": false, 00:06:49.994 "supported_io_types": { 00:06:49.994 "read": true, 00:06:49.994 "write": true, 00:06:49.994 "unmap": true, 00:06:49.994 "flush": true, 00:06:49.994 "reset": true, 00:06:49.994 "nvme_admin": false, 00:06:49.994 "nvme_io": false, 00:06:49.994 "nvme_io_md": false, 00:06:49.994 "write_zeroes": true, 00:06:49.994 "zcopy": true, 00:06:49.994 "get_zone_info": 
false, 00:06:49.994 "zone_management": false, 00:06:49.994 "zone_append": false, 00:06:49.994 "compare": false, 00:06:49.994 "compare_and_write": false, 00:06:49.994 "abort": true, 00:06:49.994 "seek_hole": false, 00:06:49.994 "seek_data": false, 00:06:49.994 "copy": true, 00:06:49.994 "nvme_iov_md": false 00:06:49.994 }, 00:06:49.994 "memory_domains": [ 00:06:49.994 { 00:06:49.994 "dma_device_id": "system", 00:06:49.994 "dma_device_type": 1 00:06:49.994 }, 00:06:49.994 { 00:06:49.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.994 "dma_device_type": 2 00:06:49.994 } 00:06:49.994 ], 00:06:49.994 "driver_specific": {} 00:06:49.994 }, 00:06:49.994 { 00:06:49.994 "name": "Passthru0", 00:06:49.994 "aliases": [ 00:06:49.994 "1e55b168-8ed3-516b-a137-22ee6988fb0a" 00:06:49.994 ], 00:06:49.994 "product_name": "passthru", 00:06:49.994 "block_size": 512, 00:06:49.994 "num_blocks": 16384, 00:06:49.994 "uuid": "1e55b168-8ed3-516b-a137-22ee6988fb0a", 00:06:49.994 "assigned_rate_limits": { 00:06:49.994 "rw_ios_per_sec": 0, 00:06:49.994 "rw_mbytes_per_sec": 0, 00:06:49.994 "r_mbytes_per_sec": 0, 00:06:49.994 "w_mbytes_per_sec": 0 00:06:49.994 }, 00:06:49.994 "claimed": false, 00:06:49.994 "zoned": false, 00:06:49.994 "supported_io_types": { 00:06:49.994 "read": true, 00:06:49.994 "write": true, 00:06:49.994 "unmap": true, 00:06:49.994 "flush": true, 00:06:49.994 "reset": true, 00:06:49.994 "nvme_admin": false, 00:06:49.994 "nvme_io": false, 00:06:49.994 "nvme_io_md": false, 00:06:49.994 "write_zeroes": true, 00:06:49.994 "zcopy": true, 00:06:49.994 "get_zone_info": false, 00:06:49.994 "zone_management": false, 00:06:49.994 "zone_append": false, 00:06:49.994 "compare": false, 00:06:49.994 "compare_and_write": false, 00:06:49.994 "abort": true, 00:06:49.994 "seek_hole": false, 00:06:49.994 "seek_data": false, 00:06:49.994 "copy": true, 00:06:49.994 "nvme_iov_md": false 00:06:49.994 }, 00:06:49.994 "memory_domains": [ 00:06:49.994 { 00:06:49.994 "dma_device_id": "system", 00:06:49.994 "dma_device_type": 1 00:06:49.994 }, 00:06:49.994 { 00:06:49.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.994 "dma_device_type": 2 00:06:49.994 } 00:06:49.994 ], 00:06:49.994 "driver_specific": { 00:06:49.994 "passthru": { 00:06:49.994 "name": "Passthru0", 00:06:49.994 "base_bdev_name": "Malloc2" 00:06:49.994 } 00:06:49.994 } 00:06:49.994 } 00:06:49.994 ]' 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.994 09:15:01 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:49.994 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:50.252 09:15:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:50.252 00:06:50.252 real 0m0.215s 00:06:50.252 user 0m0.143s 00:06:50.252 sys 0m0.015s 00:06:50.252 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.252 09:15:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.252 ************************************ 00:06:50.252 END TEST rpc_daemon_integrity 00:06:50.252 ************************************ 00:06:50.252 09:15:01 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:50.252 09:15:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:50.252 09:15:01 rpc -- rpc/rpc.sh@84 -- # killprocess 713882 00:06:50.252 09:15:01 rpc -- common/autotest_common.sh@948 -- # '[' -z 713882 ']' 00:06:50.252 09:15:01 rpc -- common/autotest_common.sh@952 -- # kill -0 713882 00:06:50.252 09:15:01 rpc -- common/autotest_common.sh@953 -- # uname 00:06:50.252 09:15:01 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.252 09:15:01 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 713882 00:06:50.252 09:15:01 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.252 09:15:01 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.252 09:15:01 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 713882' 00:06:50.252 killing process with pid 713882 00:06:50.252 09:15:01 rpc -- common/autotest_common.sh@967 -- # kill 713882 00:06:50.252 09:15:01 rpc -- common/autotest_common.sh@972 -- # wait 713882 00:06:50.510 00:06:50.510 real 0m1.838s 00:06:50.510 user 0m2.311s 00:06:50.510 sys 0m0.558s 00:06:50.510 09:15:01 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.510 09:15:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.510 ************************************ 00:06:50.510 END TEST rpc 00:06:50.510 ************************************ 00:06:50.510 09:15:01 -- common/autotest_common.sh@1142 -- # return 0 00:06:50.510 09:15:01 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:50.510 09:15:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.510 09:15:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.510 09:15:01 -- common/autotest_common.sh@10 -- # set +x 00:06:50.768 ************************************ 00:06:50.768 START TEST skip_rpc 00:06:50.768 ************************************ 00:06:50.768 09:15:01 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:50.768 * Looking for test storage... 
00:06:50.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:50.768 09:15:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:50.768 09:15:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:50.768 09:15:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:50.768 09:15:01 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.768 09:15:01 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.768 09:15:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.768 ************************************ 00:06:50.768 START TEST skip_rpc 00:06:50.768 ************************************ 00:06:50.768 09:15:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:50.768 09:15:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=714418 00:06:50.768 09:15:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:50.768 09:15:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.768 09:15:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:50.768 [2024-07-15 09:15:01.836907] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:06:50.768 [2024-07-15 09:15:01.836990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid714418 ] 00:06:50.768 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.768 [2024-07-15 09:15:01.895385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.026 [2024-07-15 09:15:02.003310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 714418 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 714418 ']' 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 714418 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 714418 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 714418' 00:06:56.288 killing process with pid 714418 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 714418 00:06:56.288 09:15:06 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 714418 00:06:56.288 00:06:56.288 real 0m5.450s 00:06:56.288 user 0m5.153s 00:06:56.288 sys 0m0.299s 00:06:56.288 09:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.288 09:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.288 ************************************ 00:06:56.288 END TEST skip_rpc 00:06:56.288 ************************************ 00:06:56.288 09:15:07 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:56.288 09:15:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:56.288 09:15:07 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.288 09:15:07 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.288 09:15:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.288 ************************************ 00:06:56.288 START TEST skip_rpc_with_json 00:06:56.288 ************************************ 00:06:56.288 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:56.288 09:15:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:56.288 09:15:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=715132 00:06:56.288 09:15:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.288 09:15:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.288 09:15:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 715132 00:06:56.288 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 715132 ']' 00:06:56.288 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.288 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.288 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
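What skip_rpc_with_json exercises from here: the first target is configured live over RPC (the nvmf TCP transport created below), its state is snapshotted with save_config, and a second target then boots straight from that snapshot via --json with no RPC server at all. A condensed sketch of that round trip, assuming a listening target and using config.json as a stand-in for the test's CONFIG_PATH:

./scripts/rpc.py nvmf_create_transport -t tcp        # same call the test issues below
./scripts/rpc.py save_config > config.json           # full subsystem dump, like the one printed below
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json

If the second target logs 'TCP Transport Init', the snapshot reproduced the live configuration, which is precisely the grep the test runs against log.txt at the end.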
00:06:56.288 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.288 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:56.288 [2024-07-15 09:15:07.336384] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:06:56.288 [2024-07-15 09:15:07.336496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid715132 ] 00:06:56.288 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.288 [2024-07-15 09:15:07.398023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.545 [2024-07-15 09:15:07.510004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.804 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.804 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:56.804 09:15:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:56.804 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.804 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:56.804 [2024-07-15 09:15:07.758753] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:56.804 request: 00:06:56.804 { 00:06:56.804 "trtype": "tcp", 00:06:56.804 "method": "nvmf_get_transports", 00:06:56.804 "req_id": 1 00:06:56.804 } 00:06:56.804 Got JSON-RPC error response 00:06:56.804 response: 00:06:56.804 { 00:06:56.804 "code": -19, 00:06:56.804 "message": "No such device" 00:06:56.804 } 00:06:56.804 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:56.805 09:15:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:56.805 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.805 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:56.805 [2024-07-15 09:15:07.766882] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.805 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.805 09:15:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:56.805 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.805 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:56.805 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.805 09:15:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:56.805 { 00:06:56.805 "subsystems": [ 00:06:56.805 { 00:06:56.805 "subsystem": "vfio_user_target", 00:06:56.805 "config": null 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "keyring", 00:06:56.805 "config": [] 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "iobuf", 00:06:56.805 "config": [ 00:06:56.805 { 00:06:56.805 "method": "iobuf_set_options", 00:06:56.805 "params": { 00:06:56.805 "small_pool_count": 8192, 00:06:56.805 "large_pool_count": 1024, 00:06:56.805 "small_bufsize": 8192, 00:06:56.805 "large_bufsize": 
135168 00:06:56.805 } 00:06:56.805 } 00:06:56.805 ] 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "sock", 00:06:56.805 "config": [ 00:06:56.805 { 00:06:56.805 "method": "sock_set_default_impl", 00:06:56.805 "params": { 00:06:56.805 "impl_name": "posix" 00:06:56.805 } 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "method": "sock_impl_set_options", 00:06:56.805 "params": { 00:06:56.805 "impl_name": "ssl", 00:06:56.805 "recv_buf_size": 4096, 00:06:56.805 "send_buf_size": 4096, 00:06:56.805 "enable_recv_pipe": true, 00:06:56.805 "enable_quickack": false, 00:06:56.805 "enable_placement_id": 0, 00:06:56.805 "enable_zerocopy_send_server": true, 00:06:56.805 "enable_zerocopy_send_client": false, 00:06:56.805 "zerocopy_threshold": 0, 00:06:56.805 "tls_version": 0, 00:06:56.805 "enable_ktls": false 00:06:56.805 } 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "method": "sock_impl_set_options", 00:06:56.805 "params": { 00:06:56.805 "impl_name": "posix", 00:06:56.805 "recv_buf_size": 2097152, 00:06:56.805 "send_buf_size": 2097152, 00:06:56.805 "enable_recv_pipe": true, 00:06:56.805 "enable_quickack": false, 00:06:56.805 "enable_placement_id": 0, 00:06:56.805 "enable_zerocopy_send_server": true, 00:06:56.805 "enable_zerocopy_send_client": false, 00:06:56.805 "zerocopy_threshold": 0, 00:06:56.805 "tls_version": 0, 00:06:56.805 "enable_ktls": false 00:06:56.805 } 00:06:56.805 } 00:06:56.805 ] 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "vmd", 00:06:56.805 "config": [] 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "accel", 00:06:56.805 "config": [ 00:06:56.805 { 00:06:56.805 "method": "accel_set_options", 00:06:56.805 "params": { 00:06:56.805 "small_cache_size": 128, 00:06:56.805 "large_cache_size": 16, 00:06:56.805 "task_count": 2048, 00:06:56.805 "sequence_count": 2048, 00:06:56.805 "buf_count": 2048 00:06:56.805 } 00:06:56.805 } 00:06:56.805 ] 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "bdev", 00:06:56.805 "config": [ 00:06:56.805 { 00:06:56.805 "method": "bdev_set_options", 00:06:56.805 "params": { 00:06:56.805 "bdev_io_pool_size": 65535, 00:06:56.805 "bdev_io_cache_size": 256, 00:06:56.805 "bdev_auto_examine": true, 00:06:56.805 "iobuf_small_cache_size": 128, 00:06:56.805 "iobuf_large_cache_size": 16 00:06:56.805 } 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "method": "bdev_raid_set_options", 00:06:56.805 "params": { 00:06:56.805 "process_window_size_kb": 1024 00:06:56.805 } 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "method": "bdev_iscsi_set_options", 00:06:56.805 "params": { 00:06:56.805 "timeout_sec": 30 00:06:56.805 } 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "method": "bdev_nvme_set_options", 00:06:56.805 "params": { 00:06:56.805 "action_on_timeout": "none", 00:06:56.805 "timeout_us": 0, 00:06:56.805 "timeout_admin_us": 0, 00:06:56.805 "keep_alive_timeout_ms": 10000, 00:06:56.805 "arbitration_burst": 0, 00:06:56.805 "low_priority_weight": 0, 00:06:56.805 "medium_priority_weight": 0, 00:06:56.805 "high_priority_weight": 0, 00:06:56.805 "nvme_adminq_poll_period_us": 10000, 00:06:56.805 "nvme_ioq_poll_period_us": 0, 00:06:56.805 "io_queue_requests": 0, 00:06:56.805 "delay_cmd_submit": true, 00:06:56.805 "transport_retry_count": 4, 00:06:56.805 "bdev_retry_count": 3, 00:06:56.805 "transport_ack_timeout": 0, 00:06:56.805 "ctrlr_loss_timeout_sec": 0, 00:06:56.805 "reconnect_delay_sec": 0, 00:06:56.805 "fast_io_fail_timeout_sec": 0, 00:06:56.805 "disable_auto_failback": false, 00:06:56.805 "generate_uuids": false, 00:06:56.805 "transport_tos": 0, 
00:06:56.805 "nvme_error_stat": false, 00:06:56.805 "rdma_srq_size": 0, 00:06:56.805 "io_path_stat": false, 00:06:56.805 "allow_accel_sequence": false, 00:06:56.805 "rdma_max_cq_size": 0, 00:06:56.805 "rdma_cm_event_timeout_ms": 0, 00:06:56.805 "dhchap_digests": [ 00:06:56.805 "sha256", 00:06:56.805 "sha384", 00:06:56.805 "sha512" 00:06:56.805 ], 00:06:56.805 "dhchap_dhgroups": [ 00:06:56.805 "null", 00:06:56.805 "ffdhe2048", 00:06:56.805 "ffdhe3072", 00:06:56.805 "ffdhe4096", 00:06:56.805 "ffdhe6144", 00:06:56.805 "ffdhe8192" 00:06:56.805 ] 00:06:56.805 } 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "method": "bdev_nvme_set_hotplug", 00:06:56.805 "params": { 00:06:56.805 "period_us": 100000, 00:06:56.805 "enable": false 00:06:56.805 } 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "method": "bdev_wait_for_examine" 00:06:56.805 } 00:06:56.805 ] 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "scsi", 00:06:56.805 "config": null 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "scheduler", 00:06:56.805 "config": [ 00:06:56.805 { 00:06:56.805 "method": "framework_set_scheduler", 00:06:56.805 "params": { 00:06:56.805 "name": "static" 00:06:56.805 } 00:06:56.805 } 00:06:56.805 ] 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "vhost_scsi", 00:06:56.805 "config": [] 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "vhost_blk", 00:06:56.805 "config": [] 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "ublk", 00:06:56.805 "config": [] 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "nbd", 00:06:56.805 "config": [] 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "nvmf", 00:06:56.805 "config": [ 00:06:56.805 { 00:06:56.805 "method": "nvmf_set_config", 00:06:56.805 "params": { 00:06:56.805 "discovery_filter": "match_any", 00:06:56.805 "admin_cmd_passthru": { 00:06:56.805 "identify_ctrlr": false 00:06:56.805 } 00:06:56.805 } 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "method": "nvmf_set_max_subsystems", 00:06:56.805 "params": { 00:06:56.805 "max_subsystems": 1024 00:06:56.805 } 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "method": "nvmf_set_crdt", 00:06:56.805 "params": { 00:06:56.805 "crdt1": 0, 00:06:56.805 "crdt2": 0, 00:06:56.805 "crdt3": 0 00:06:56.805 } 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "method": "nvmf_create_transport", 00:06:56.805 "params": { 00:06:56.805 "trtype": "TCP", 00:06:56.805 "max_queue_depth": 128, 00:06:56.805 "max_io_qpairs_per_ctrlr": 127, 00:06:56.805 "in_capsule_data_size": 4096, 00:06:56.805 "max_io_size": 131072, 00:06:56.805 "io_unit_size": 131072, 00:06:56.805 "max_aq_depth": 128, 00:06:56.805 "num_shared_buffers": 511, 00:06:56.805 "buf_cache_size": 4294967295, 00:06:56.805 "dif_insert_or_strip": false, 00:06:56.805 "zcopy": false, 00:06:56.805 "c2h_success": true, 00:06:56.805 "sock_priority": 0, 00:06:56.805 "abort_timeout_sec": 1, 00:06:56.805 "ack_timeout": 0, 00:06:56.805 "data_wr_pool_size": 0 00:06:56.805 } 00:06:56.805 } 00:06:56.805 ] 00:06:56.805 }, 00:06:56.805 { 00:06:56.805 "subsystem": "iscsi", 00:06:56.805 "config": [ 00:06:56.805 { 00:06:56.805 "method": "iscsi_set_options", 00:06:56.805 "params": { 00:06:56.805 "node_base": "iqn.2016-06.io.spdk", 00:06:56.805 "max_sessions": 128, 00:06:56.805 "max_connections_per_session": 2, 00:06:56.805 "max_queue_depth": 64, 00:06:56.805 "default_time2wait": 2, 00:06:56.805 "default_time2retain": 20, 00:06:56.805 "first_burst_length": 8192, 00:06:56.805 "immediate_data": true, 00:06:56.805 "allow_duplicated_isid": false, 00:06:56.805 
"error_recovery_level": 0, 00:06:56.805 "nop_timeout": 60, 00:06:56.805 "nop_in_interval": 30, 00:06:56.805 "disable_chap": false, 00:06:56.805 "require_chap": false, 00:06:56.805 "mutual_chap": false, 00:06:56.805 "chap_group": 0, 00:06:56.805 "max_large_datain_per_connection": 64, 00:06:56.805 "max_r2t_per_connection": 4, 00:06:56.805 "pdu_pool_size": 36864, 00:06:56.805 "immediate_data_pool_size": 16384, 00:06:56.805 "data_out_pool_size": 2048 00:06:56.805 } 00:06:56.806 } 00:06:56.806 ] 00:06:56.806 } 00:06:56.806 ] 00:06:56.806 } 00:06:56.806 09:15:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:56.806 09:15:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 715132 00:06:56.806 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 715132 ']' 00:06:56.806 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 715132 00:06:56.806 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:56.806 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.806 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 715132 00:06:56.806 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.806 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.806 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 715132' 00:06:56.806 killing process with pid 715132 00:06:56.806 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 715132 00:06:56.806 09:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 715132 00:06:57.372 09:15:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=715619 00:06:57.372 09:15:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:57.372 09:15:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 715619 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 715619 ']' 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 715619 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 715619 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 715619' 00:07:02.639 killing process with pid 715619 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 715619 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 715619 00:07:02.639 09:15:13 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:02.639 00:07:02.639 real 0m6.531s 00:07:02.639 user 0m6.155s 00:07:02.639 sys 0m0.661s 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.639 09:15:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:02.639 ************************************ 00:07:02.639 END TEST skip_rpc_with_json 00:07:02.639 ************************************ 00:07:02.898 09:15:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:02.898 09:15:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:02.898 09:15:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.898 09:15:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.898 09:15:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.898 ************************************ 00:07:02.898 START TEST skip_rpc_with_delay 00:07:02.898 ************************************ 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:02.898 [2024-07-15 09:15:13.917940] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
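The two app.c errors around this point are the expected outcome, not a test failure: '--wait-for-rpc' asks the app to pause initialization until an RPC tells it to proceed, which can never happen under '--no-rpc-server', so spdk_app_start refuses to run and the NOT wrapper counts the non-zero exit as a pass. A sketch of provoking the same failure directly, with the binary path as used in this workspace:

./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
echo $?    # non-zero: startup aborts with the "Cannot use '--wait-for-rpc'" error shown above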
00:07:02.898 [2024-07-15 09:15:13.918060] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.898 00:07:02.898 real 0m0.068s 00:07:02.898 user 0m0.045s 00:07:02.898 sys 0m0.022s 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.898 09:15:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:02.898 ************************************ 00:07:02.898 END TEST skip_rpc_with_delay 00:07:02.898 ************************************ 00:07:02.898 09:15:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:02.898 09:15:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:02.898 09:15:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:02.898 09:15:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:02.898 09:15:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.898 09:15:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.898 09:15:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.898 ************************************ 00:07:02.898 START TEST exit_on_failed_rpc_init 00:07:02.898 ************************************ 00:07:02.898 09:15:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:07:02.898 09:15:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=716480 00:07:02.898 09:15:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.898 09:15:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 716480 00:07:02.898 09:15:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 716480 ']' 00:07:02.898 09:15:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.898 09:15:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.898 09:15:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.898 09:15:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.898 09:15:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:02.898 [2024-07-15 09:15:14.037347] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:07:02.898 [2024-07-15 09:15:14.037458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716480 ] 00:07:02.898 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.156 [2024-07-15 09:15:14.095650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.156 [2024-07-15 09:15:14.194902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:03.415 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:03.415 [2024-07-15 09:15:14.497558] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
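With the primary target listening, the test launches a second spdk_tgt (core mask 0x2) under NOT and expects initialization to fail: the default RPC socket /var/tmp/spdk.sock is already owned by the first instance, as the "in use. Specify another." error below confirms. Running two targets side by side instead requires giving each its own RPC socket via -r (and, when sharing hugepages, distinct DPDK --file-prefix values, visible in the EAL parameters above). A sketch with illustrative socket paths:

    # Two coexisting targets need distinct RPC sockets (paths are examples):
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    # Reusing one socket is exactly the failure this test provokes.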
00:07:03.415 [2024-07-15 09:15:14.497648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716493 ] 00:07:03.415 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.415 [2024-07-15 09:15:14.553679] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.673 [2024-07-15 09:15:14.665434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.673 [2024-07-15 09:15:14.665531] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:03.673 [2024-07-15 09:15:14.665553] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:03.673 [2024-07-15 09:15:14.665565] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 716480 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 716480 ']' 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 716480 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 716480 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 716480' 00:07:03.673 killing process with pid 716480 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 716480 00:07:03.673 09:15:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 716480 00:07:04.239 00:07:04.239 real 0m1.273s 00:07:04.239 user 0m1.440s 00:07:04.239 sys 0m0.439s 00:07:04.239 09:15:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.239 09:15:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:04.239 ************************************ 00:07:04.239 END TEST exit_on_failed_rpc_init 00:07:04.239 ************************************ 00:07:04.239 09:15:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:04.239 09:15:15 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:04.239 00:07:04.239 real 0m13.577s 00:07:04.239 user 0m12.897s 00:07:04.239 sys 0m1.588s 00:07:04.239 09:15:15 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.239 09:15:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.239 ************************************ 00:07:04.239 END TEST skip_rpc 00:07:04.239 ************************************ 00:07:04.239 09:15:15 -- common/autotest_common.sh@1142 -- # return 0 00:07:04.239 09:15:15 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:04.239 09:15:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.239 09:15:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.239 09:15:15 -- common/autotest_common.sh@10 -- # set +x 00:07:04.239 ************************************ 00:07:04.239 START TEST rpc_client 00:07:04.239 ************************************ 00:07:04.239 09:15:15 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:04.239 * Looking for test storage... 00:07:04.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:04.239 09:15:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:04.239 OK 00:07:04.239 09:15:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:04.239 00:07:04.239 real 0m0.069s 00:07:04.239 user 0m0.042s 00:07:04.239 sys 0m0.032s 00:07:04.239 09:15:15 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.239 09:15:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:04.239 ************************************ 00:07:04.239 END TEST rpc_client 00:07:04.239 ************************************ 00:07:04.239 09:15:15 -- common/autotest_common.sh@1142 -- # return 0 00:07:04.239 09:15:15 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:04.239 09:15:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.239 09:15:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.239 09:15:15 -- common/autotest_common.sh@10 -- # set +x 00:07:04.498 ************************************ 00:07:04.498 START TEST json_config 00:07:04.498 ************************************ 00:07:04.498 09:15:15 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:04.498 09:15:15 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.498 09:15:15 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.498 09:15:15 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:04.498 09:15:15 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.498 09:15:15 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.498 09:15:15 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.499 09:15:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.499 09:15:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.499 09:15:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.499 09:15:15 json_config -- paths/export.sh@5 -- # export PATH 00:07:04.499 09:15:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.499 09:15:15 json_config -- nvmf/common.sh@47 -- # : 0 00:07:04.499 09:15:15 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:04.499 09:15:15 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:04.499 09:15:15 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.499 09:15:15 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.499 09:15:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.499 09:15:15 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:04.499 09:15:15 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:04.499 09:15:15 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:07:04.499 INFO: JSON configuration test init 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:07:04.499 09:15:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:04.499 09:15:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:07:04.499 09:15:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:04.499 09:15:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.499 09:15:15 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:07:04.499 09:15:15 json_config -- json_config/common.sh@9 -- # local app=target 00:07:04.499 09:15:15 json_config -- json_config/common.sh@10 -- # shift 00:07:04.499 09:15:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:04.499 09:15:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:04.499 09:15:15 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:07:04.499 09:15:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:04.499 09:15:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:04.499 09:15:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=716735 00:07:04.499 09:15:15 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:04.499 09:15:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:04.499 Waiting for target to run... 00:07:04.499 09:15:15 json_config -- json_config/common.sh@25 -- # waitforlisten 716735 /var/tmp/spdk_tgt.sock 00:07:04.499 09:15:15 json_config -- common/autotest_common.sh@829 -- # '[' -z 716735 ']' 00:07:04.499 09:15:15 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:04.499 09:15:15 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.499 09:15:15 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:04.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:04.499 09:15:15 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.499 09:15:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.499 [2024-07-15 09:15:15.559049] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:07:04.499 [2024-07-15 09:15:15.559132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716735 ] 00:07:04.499 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.757 [2024-07-15 09:15:15.878546] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.013 [2024-07-15 09:15:15.957128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.578 09:15:16 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.578 09:15:16 json_config -- common/autotest_common.sh@862 -- # return 0 00:07:05.578 09:15:16 json_config -- json_config/common.sh@26 -- # echo '' 00:07:05.578 00:07:05.578 09:15:16 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:07:05.578 09:15:16 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:07:05.578 09:15:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.578 09:15:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.578 09:15:16 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:07:05.578 09:15:16 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:07:05.578 09:15:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:05.578 09:15:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.578 09:15:16 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:05.578 09:15:16 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:07:05.578 09:15:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:07:08.857 09:15:19 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:07:08.857 09:15:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:08.857 09:15:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:08.857 09:15:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:08.858 09:15:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:07:08.858 09:15:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:08.858 09:15:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@55 -- # return 0 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:07:08.858 09:15:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:08.858 09:15:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:07:08.858 09:15:19 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:08.858 09:15:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:09.116 MallocForNvmf0 00:07:09.116 09:15:20 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:09.116 09:15:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:09.374 MallocForNvmf1 
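The json_config setup replayed here builds a complete NVMe/TCP target over the custom RPC socket: two malloc bdevs (size in MB, then block size in bytes), a TCP transport, a subsystem, its namespaces, and a listener. Consolidated, the sequence issued by the tgt_rpc calls above and below looks like this (socket path and arguments as used by the test):

    RPC='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512  --name MallocForNvmf0    # 8 MB, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB, 1024 B blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

Each bdev_malloc_create prints the new bdev's name on success, which is the MallocForNvmf0/MallocForNvmf1 output interleaved with the log.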
00:07:09.374 09:15:20 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:09.374 09:15:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:09.632 [2024-07-15 09:15:20.641132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.632 09:15:20 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:09.632 09:15:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:09.889 09:15:20 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:09.889 09:15:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:10.146 09:15:21 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:10.146 09:15:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:10.404 09:15:21 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:10.404 09:15:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:10.661 [2024-07-15 09:15:21.600317] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:10.661 09:15:21 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:07:10.661 09:15:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:10.661 09:15:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.661 09:15:21 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:07:10.661 09:15:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:10.661 09:15:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.661 09:15:21 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:07:10.661 09:15:21 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:10.661 09:15:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:10.920 MallocBdevForConfigChangeCheck 00:07:10.920 09:15:21 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:07:10.920 09:15:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:10.920 09:15:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.920 09:15:21 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:07:10.920 09:15:21 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:11.186 09:15:22 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:07:11.186 INFO: shutting down applications... 00:07:11.186 09:15:22 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:07:11.186 09:15:22 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:07:11.186 09:15:22 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:07:11.186 09:15:22 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:13.090 Calling clear_iscsi_subsystem 00:07:13.090 Calling clear_nvmf_subsystem 00:07:13.090 Calling clear_nbd_subsystem 00:07:13.090 Calling clear_ublk_subsystem 00:07:13.090 Calling clear_vhost_blk_subsystem 00:07:13.090 Calling clear_vhost_scsi_subsystem 00:07:13.090 Calling clear_bdev_subsystem 00:07:13.090 09:15:23 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:13.090 09:15:23 json_config -- json_config/json_config.sh@343 -- # count=100 00:07:13.090 09:15:23 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:07:13.090 09:15:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:13.090 09:15:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:13.090 09:15:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:13.090 09:15:24 json_config -- json_config/json_config.sh@345 -- # break 00:07:13.090 09:15:24 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:07:13.090 09:15:24 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:07:13.090 09:15:24 json_config -- json_config/common.sh@31 -- # local app=target 00:07:13.090 09:15:24 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:13.090 09:15:24 json_config -- json_config/common.sh@35 -- # [[ -n 716735 ]] 00:07:13.090 09:15:24 json_config -- json_config/common.sh@38 -- # kill -SIGINT 716735 00:07:13.090 09:15:24 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:13.090 09:15:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:13.090 09:15:24 json_config -- json_config/common.sh@41 -- # kill -0 716735 00:07:13.090 09:15:24 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:13.656 09:15:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:13.656 09:15:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:13.656 09:15:24 json_config -- json_config/common.sh@41 -- # kill -0 716735 00:07:13.657 09:15:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:13.657 09:15:24 json_config -- json_config/common.sh@43 -- # break 00:07:13.657 09:15:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:13.657 09:15:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:13.657 SPDK target shutdown done 00:07:13.657 09:15:24 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:07:13.657 INFO: relaunching applications... 00:07:13.657 09:15:24 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:13.657 09:15:24 json_config -- json_config/common.sh@9 -- # local app=target 00:07:13.657 09:15:24 json_config -- json_config/common.sh@10 -- # shift 00:07:13.657 09:15:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:13.657 09:15:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:13.657 09:15:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:13.657 09:15:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:13.657 09:15:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:13.657 09:15:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=718007 00:07:13.657 09:15:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:13.657 09:15:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:13.657 Waiting for target to run... 00:07:13.657 09:15:24 json_config -- json_config/common.sh@25 -- # waitforlisten 718007 /var/tmp/spdk_tgt.sock 00:07:13.657 09:15:24 json_config -- common/autotest_common.sh@829 -- # '[' -z 718007 ']' 00:07:13.657 09:15:24 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:13.657 09:15:24 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.657 09:15:24 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:13.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:13.657 09:15:24 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.657 09:15:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:13.657 [2024-07-15 09:15:24.825417] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
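Relaunching here feeds the configuration captured earlier by save_config back to spdk_tgt via --json, which replays the recorded RPCs at startup instead of requiring --wait-for-rpc. The round trip, reduced to its essentials (file name as used by the test):

    # Capture the live configuration, then restart from it:
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    # ... stop the target, then:
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json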
00:07:13.657 [2024-07-15 09:15:24.825515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid718007 ] 00:07:13.917 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.176 [2024-07-15 09:15:25.335189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.435 [2024-07-15 09:15:25.429085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.719 [2024-07-15 09:15:28.458596] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.719 [2024-07-15 09:15:28.491003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:18.285 09:15:29 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.285 09:15:29 json_config -- common/autotest_common.sh@862 -- # return 0 00:07:18.285 09:15:29 json_config -- json_config/common.sh@26 -- # echo '' 00:07:18.285 00:07:18.285 09:15:29 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:07:18.285 09:15:29 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:18.285 INFO: Checking if target configuration is the same... 00:07:18.285 09:15:29 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:18.285 09:15:29 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:07:18.285 09:15:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:18.285 + '[' 2 -ne 2 ']' 00:07:18.285 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:18.285 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:18.285 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:18.285 +++ basename /dev/fd/62 00:07:18.285 ++ mktemp /tmp/62.XXX 00:07:18.285 + tmp_file_1=/tmp/62.oLF 00:07:18.285 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:18.285 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:18.285 + tmp_file_2=/tmp/spdk_tgt_config.json.wfY 00:07:18.285 + ret=0 00:07:18.285 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:18.543 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:18.543 + diff -u /tmp/62.oLF /tmp/spdk_tgt_config.json.wfY 00:07:18.543 + echo 'INFO: JSON config files are the same' 00:07:18.543 INFO: JSON config files are the same 00:07:18.543 + rm /tmp/62.oLF /tmp/spdk_tgt_config.json.wfY 00:07:18.543 + exit 0 00:07:18.543 09:15:29 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:07:18.543 09:15:29 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:18.543 INFO: changing configuration and checking if this can be detected... 
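The comparison run by json_diff.sh above avoids false mismatches from key ordering by passing both configurations through config_filter.py -method sort before diffing. Roughly (output file names are illustrative; the script itself uses mktemp and /dev/fd redirection):

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > live.json
    ./test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > disk.json
    diff -u disk.json live.json && echo 'INFO: JSON config files are the same'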
00:07:18.543 09:15:29 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:18.543 09:15:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:18.801 09:15:29 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:18.801 09:15:29 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:07:18.801 09:15:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:18.801 + '[' 2 -ne 2 ']' 00:07:18.801 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:18.801 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:18.801 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:18.801 +++ basename /dev/fd/62 00:07:18.801 ++ mktemp /tmp/62.XXX 00:07:18.801 + tmp_file_1=/tmp/62.iyq 00:07:18.801 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:18.801 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:18.801 + tmp_file_2=/tmp/spdk_tgt_config.json.oIZ 00:07:18.801 + ret=0 00:07:18.801 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:19.369 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:19.369 + diff -u /tmp/62.iyq /tmp/spdk_tgt_config.json.oIZ 00:07:19.369 + ret=1 00:07:19.369 + echo '=== Start of file: /tmp/62.iyq ===' 00:07:19.369 + cat /tmp/62.iyq 00:07:19.369 + echo '=== End of file: /tmp/62.iyq ===' 00:07:19.369 + echo '' 00:07:19.369 + echo '=== Start of file: /tmp/spdk_tgt_config.json.oIZ ===' 00:07:19.369 + cat /tmp/spdk_tgt_config.json.oIZ 00:07:19.369 + echo '=== End of file: /tmp/spdk_tgt_config.json.oIZ ===' 00:07:19.369 + echo '' 00:07:19.369 + rm /tmp/62.iyq /tmp/spdk_tgt_config.json.oIZ 00:07:19.369 + exit 1 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:07:19.369 INFO: configuration change detected. 
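Change detection done, the test tears the target down through killprocess, which confirms the pid is still alive and reactor-owned before killing and reaping it, as the trace below shows. A rough sketch of the pattern (the harness version additionally refuses to kill processes running as sudo):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                          # fail fast if already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid" && wait "$pid"              # wait works: pid is our child
    }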
00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@317 -- # [[ -n 718007 ]] 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@193 -- # uname -s 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.369 09:15:30 json_config -- json_config/json_config.sh@323 -- # killprocess 718007 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@948 -- # '[' -z 718007 ']' 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@952 -- # kill -0 718007 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@953 -- # uname 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 718007 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 718007' 00:07:19.369 killing process with pid 718007 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@967 -- # kill 718007 00:07:19.369 09:15:30 json_config -- common/autotest_common.sh@972 -- # wait 718007 00:07:21.272 09:15:32 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:21.272 09:15:32 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:07:21.272 09:15:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:21.272 09:15:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:21.272 09:15:32 json_config -- json_config/json_config.sh@328 -- # return 0 00:07:21.272 09:15:32 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:07:21.272 INFO: Success 00:07:21.272 00:07:21.272 real 0m16.616s 00:07:21.272 user 
0m18.555s 00:07:21.272 sys 0m2.030s 00:07:21.272 09:15:32 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.272 09:15:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:21.272 ************************************ 00:07:21.272 END TEST json_config 00:07:21.272 ************************************ 00:07:21.272 09:15:32 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.272 09:15:32 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:21.272 09:15:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.272 09:15:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.272 09:15:32 -- common/autotest_common.sh@10 -- # set +x 00:07:21.272 ************************************ 00:07:21.272 START TEST json_config_extra_key 00:07:21.272 ************************************ 00:07:21.272 09:15:32 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:21.272 09:15:32 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.272 09:15:32 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.272 09:15:32 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.272 09:15:32 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.272 09:15:32 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.273 09:15:32 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.273 09:15:32 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.273 09:15:32 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.273 09:15:32 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:21.273 09:15:32 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.273 09:15:32 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:21.273 09:15:32 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.273 09:15:32 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.273 09:15:32 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.273 09:15:32 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.273 09:15:32 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.273 09:15:32 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.273 09:15:32 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.273 09:15:32 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.273 09:15:32 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:21.273 09:15:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:21.273 09:15:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:21.273 09:15:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:21.273 09:15:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:21.273 09:15:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:21.273 09:15:32 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:21.273 09:15:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:21.273 09:15:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:21.273 09:15:32 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:21.273 09:15:32 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:21.273 INFO: launching applications... 00:07:21.273 09:15:32 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:21.273 09:15:32 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:21.273 09:15:32 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:21.273 09:15:32 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:21.273 09:15:32 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:21.273 09:15:32 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:21.273 09:15:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:21.273 09:15:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:21.273 09:15:32 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=718978 00:07:21.273 09:15:32 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:21.273 09:15:32 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:21.273 Waiting for target to run... 00:07:21.273 09:15:32 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 718978 /var/tmp/spdk_tgt.sock 00:07:21.273 09:15:32 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 718978 ']' 00:07:21.273 09:15:32 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:21.273 09:15:32 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.273 09:15:32 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:21.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:21.273 09:15:32 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.273 09:15:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:21.273 [2024-07-15 09:15:32.228409] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
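Once its target is up, json_config_extra_key only needs to verify startup with the extra-key JSON and then shut down; the shutdown path below sends SIGINT and polls kill -0 for up to thirty half-second intervals. As a sketch:

    # Bounded graceful shutdown, mirroring the loop below:
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1    # still running; caller may escalate
    }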
00:07:21.273 [2024-07-15 09:15:32.228485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid718978 ] 00:07:21.273 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.531 [2024-07-15 09:15:32.552178] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.531 [2024-07-15 09:15:32.629778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.098 09:15:33 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.098 09:15:33 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:07:22.098 09:15:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:22.098 00:07:22.098 09:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:22.098 INFO: shutting down applications... 00:07:22.098 09:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:22.098 09:15:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:22.098 09:15:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:22.098 09:15:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 718978 ]] 00:07:22.098 09:15:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 718978 00:07:22.098 09:15:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:22.098 09:15:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:22.098 09:15:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 718978 00:07:22.098 09:15:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:22.663 09:15:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:22.663 09:15:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:22.663 09:15:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 718978 00:07:22.663 09:15:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:22.663 09:15:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:22.663 09:15:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:22.663 09:15:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:22.663 SPDK target shutdown done 00:07:22.663 09:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:22.663 Success 00:07:22.663 00:07:22.663 real 0m1.558s 00:07:22.663 user 0m1.574s 00:07:22.663 sys 0m0.397s 00:07:22.663 09:15:33 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.663 09:15:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:22.663 ************************************ 00:07:22.663 END TEST json_config_extra_key 00:07:22.663 ************************************ 00:07:22.663 09:15:33 -- common/autotest_common.sh@1142 -- # return 0 00:07:22.663 09:15:33 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:22.663 09:15:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.663 09:15:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.663 09:15:33 -- 
common/autotest_common.sh@10 -- # set +x 00:07:22.663 ************************************ 00:07:22.663 START TEST alias_rpc 00:07:22.663 ************************************ 00:07:22.663 09:15:33 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:22.663 * Looking for test storage... 00:07:22.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:22.663 09:15:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:22.663 09:15:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=719191 00:07:22.663 09:15:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:22.663 09:15:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 719191 00:07:22.663 09:15:33 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 719191 ']' 00:07:22.663 09:15:33 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.663 09:15:33 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.663 09:15:33 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.663 09:15:33 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.663 09:15:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.663 [2024-07-15 09:15:33.832283] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:07:22.663 [2024-07-15 09:15:33.832363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid719191 ] 00:07:22.922 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.922 [2024-07-15 09:15:33.890232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.922 [2024-07-15 09:15:33.996714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.180 09:15:34 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.180 09:15:34 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.180 09:15:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:23.438 09:15:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 719191 00:07:23.438 09:15:34 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 719191 ']' 00:07:23.438 09:15:34 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 719191 00:07:23.438 09:15:34 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:07:23.438 09:15:34 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.438 09:15:34 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 719191 00:07:23.438 09:15:34 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.438 09:15:34 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.438 09:15:34 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 719191' 00:07:23.438 killing process with pid 719191 00:07:23.438 09:15:34 alias_rpc -- common/autotest_common.sh@967 
-- # kill 719191 00:07:23.438 09:15:34 alias_rpc -- common/autotest_common.sh@972 -- # wait 719191 00:07:24.004 00:07:24.004 real 0m1.216s 00:07:24.004 user 0m1.320s 00:07:24.004 sys 0m0.403s 00:07:24.004 09:15:34 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.004 09:15:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.004 ************************************ 00:07:24.004 END TEST alias_rpc 00:07:24.004 ************************************ 00:07:24.004 09:15:34 -- common/autotest_common.sh@1142 -- # return 0 00:07:24.004 09:15:34 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:24.004 09:15:34 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:24.004 09:15:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.004 09:15:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.004 09:15:34 -- common/autotest_common.sh@10 -- # set +x 00:07:24.004 ************************************ 00:07:24.004 START TEST spdkcli_tcp 00:07:24.004 ************************************ 00:07:24.004 09:15:34 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:24.004 * Looking for test storage... 00:07:24.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:24.005 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:24.005 09:15:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:24.005 09:15:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:24.005 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:24.005 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:24.005 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:24.005 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:24.005 09:15:35 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:24.005 09:15:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.005 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=719477 00:07:24.005 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:24.005 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 719477 00:07:24.005 09:15:35 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 719477 ']' 00:07:24.005 09:15:35 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.005 09:15:35 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:24.005 09:15:35 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.005 09:15:35 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:24.005 09:15:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.005 [2024-07-15 09:15:35.101421] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
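A note on the harness pattern visible in the alias_rpc run above, and repeated by every test in this section: spdk_tgt is started in the background, the test blocks until the target listens on /var/tmp/spdk.sock (waitforlisten), one RPC path is exercised, and the target is killed. A condensed sketch under stated assumptions: the waitforlisten loop is simplified from the real helper in test/common/autotest_common.sh, config.json is a placeholder input, and -i on load_config is believed to be --include-aliases, which is what makes this an alias test.

  ./build/bin/spdk_tgt &                                   # target under test
  tgt_pid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # simplified waitforlisten
  ./scripts/rpc.py load_config -i < config.json            # replay config, aliases included
  kill -9 "$tgt_pid"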
00:07:24.005 [2024-07-15 09:15:35.101498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid719477 ] 00:07:24.005 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.005 [2024-07-15 09:15:35.157543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.263 [2024-07-15 09:15:35.265513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.263 [2024-07-15 09:15:35.265518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.521 09:15:35 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.521 09:15:35 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:07:24.521 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=719484 00:07:24.521 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:24.521 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:24.780 [ 00:07:24.780 "bdev_malloc_delete", 00:07:24.780 "bdev_malloc_create", 00:07:24.780 "bdev_null_resize", 00:07:24.780 "bdev_null_delete", 00:07:24.780 "bdev_null_create", 00:07:24.780 "bdev_nvme_cuse_unregister", 00:07:24.780 "bdev_nvme_cuse_register", 00:07:24.780 "bdev_opal_new_user", 00:07:24.780 "bdev_opal_set_lock_state", 00:07:24.780 "bdev_opal_delete", 00:07:24.780 "bdev_opal_get_info", 00:07:24.780 "bdev_opal_create", 00:07:24.780 "bdev_nvme_opal_revert", 00:07:24.780 "bdev_nvme_opal_init", 00:07:24.780 "bdev_nvme_send_cmd", 00:07:24.780 "bdev_nvme_get_path_iostat", 00:07:24.780 "bdev_nvme_get_mdns_discovery_info", 00:07:24.780 "bdev_nvme_stop_mdns_discovery", 00:07:24.780 "bdev_nvme_start_mdns_discovery", 00:07:24.780 "bdev_nvme_set_multipath_policy", 00:07:24.780 "bdev_nvme_set_preferred_path", 00:07:24.780 "bdev_nvme_get_io_paths", 00:07:24.780 "bdev_nvme_remove_error_injection", 00:07:24.780 "bdev_nvme_add_error_injection", 00:07:24.780 "bdev_nvme_get_discovery_info", 00:07:24.780 "bdev_nvme_stop_discovery", 00:07:24.780 "bdev_nvme_start_discovery", 00:07:24.780 "bdev_nvme_get_controller_health_info", 00:07:24.780 "bdev_nvme_disable_controller", 00:07:24.780 "bdev_nvme_enable_controller", 00:07:24.780 "bdev_nvme_reset_controller", 00:07:24.780 "bdev_nvme_get_transport_statistics", 00:07:24.780 "bdev_nvme_apply_firmware", 00:07:24.780 "bdev_nvme_detach_controller", 00:07:24.780 "bdev_nvme_get_controllers", 00:07:24.780 "bdev_nvme_attach_controller", 00:07:24.780 "bdev_nvme_set_hotplug", 00:07:24.780 "bdev_nvme_set_options", 00:07:24.780 "bdev_passthru_delete", 00:07:24.780 "bdev_passthru_create", 00:07:24.780 "bdev_lvol_set_parent_bdev", 00:07:24.780 "bdev_lvol_set_parent", 00:07:24.780 "bdev_lvol_check_shallow_copy", 00:07:24.780 "bdev_lvol_start_shallow_copy", 00:07:24.780 "bdev_lvol_grow_lvstore", 00:07:24.780 "bdev_lvol_get_lvols", 00:07:24.780 "bdev_lvol_get_lvstores", 00:07:24.780 "bdev_lvol_delete", 00:07:24.780 "bdev_lvol_set_read_only", 00:07:24.780 "bdev_lvol_resize", 00:07:24.780 "bdev_lvol_decouple_parent", 00:07:24.780 "bdev_lvol_inflate", 00:07:24.780 "bdev_lvol_rename", 00:07:24.780 "bdev_lvol_clone_bdev", 00:07:24.780 "bdev_lvol_clone", 00:07:24.780 "bdev_lvol_snapshot", 00:07:24.780 "bdev_lvol_create", 00:07:24.780 "bdev_lvol_delete_lvstore", 00:07:24.780 
"bdev_lvol_rename_lvstore", 00:07:24.780 "bdev_lvol_create_lvstore", 00:07:24.780 "bdev_raid_set_options", 00:07:24.780 "bdev_raid_remove_base_bdev", 00:07:24.780 "bdev_raid_add_base_bdev", 00:07:24.780 "bdev_raid_delete", 00:07:24.780 "bdev_raid_create", 00:07:24.780 "bdev_raid_get_bdevs", 00:07:24.780 "bdev_error_inject_error", 00:07:24.780 "bdev_error_delete", 00:07:24.780 "bdev_error_create", 00:07:24.780 "bdev_split_delete", 00:07:24.780 "bdev_split_create", 00:07:24.780 "bdev_delay_delete", 00:07:24.780 "bdev_delay_create", 00:07:24.780 "bdev_delay_update_latency", 00:07:24.780 "bdev_zone_block_delete", 00:07:24.780 "bdev_zone_block_create", 00:07:24.780 "blobfs_create", 00:07:24.780 "blobfs_detect", 00:07:24.780 "blobfs_set_cache_size", 00:07:24.780 "bdev_aio_delete", 00:07:24.780 "bdev_aio_rescan", 00:07:24.780 "bdev_aio_create", 00:07:24.780 "bdev_ftl_set_property", 00:07:24.780 "bdev_ftl_get_properties", 00:07:24.780 "bdev_ftl_get_stats", 00:07:24.780 "bdev_ftl_unmap", 00:07:24.780 "bdev_ftl_unload", 00:07:24.780 "bdev_ftl_delete", 00:07:24.780 "bdev_ftl_load", 00:07:24.780 "bdev_ftl_create", 00:07:24.780 "bdev_virtio_attach_controller", 00:07:24.780 "bdev_virtio_scsi_get_devices", 00:07:24.780 "bdev_virtio_detach_controller", 00:07:24.780 "bdev_virtio_blk_set_hotplug", 00:07:24.780 "bdev_iscsi_delete", 00:07:24.780 "bdev_iscsi_create", 00:07:24.780 "bdev_iscsi_set_options", 00:07:24.780 "accel_error_inject_error", 00:07:24.780 "ioat_scan_accel_module", 00:07:24.780 "dsa_scan_accel_module", 00:07:24.780 "iaa_scan_accel_module", 00:07:24.780 "vfu_virtio_create_scsi_endpoint", 00:07:24.780 "vfu_virtio_scsi_remove_target", 00:07:24.780 "vfu_virtio_scsi_add_target", 00:07:24.780 "vfu_virtio_create_blk_endpoint", 00:07:24.780 "vfu_virtio_delete_endpoint", 00:07:24.780 "keyring_file_remove_key", 00:07:24.780 "keyring_file_add_key", 00:07:24.780 "keyring_linux_set_options", 00:07:24.780 "iscsi_get_histogram", 00:07:24.780 "iscsi_enable_histogram", 00:07:24.781 "iscsi_set_options", 00:07:24.781 "iscsi_get_auth_groups", 00:07:24.781 "iscsi_auth_group_remove_secret", 00:07:24.781 "iscsi_auth_group_add_secret", 00:07:24.781 "iscsi_delete_auth_group", 00:07:24.781 "iscsi_create_auth_group", 00:07:24.781 "iscsi_set_discovery_auth", 00:07:24.781 "iscsi_get_options", 00:07:24.781 "iscsi_target_node_request_logout", 00:07:24.781 "iscsi_target_node_set_redirect", 00:07:24.781 "iscsi_target_node_set_auth", 00:07:24.781 "iscsi_target_node_add_lun", 00:07:24.781 "iscsi_get_stats", 00:07:24.781 "iscsi_get_connections", 00:07:24.781 "iscsi_portal_group_set_auth", 00:07:24.781 "iscsi_start_portal_group", 00:07:24.781 "iscsi_delete_portal_group", 00:07:24.781 "iscsi_create_portal_group", 00:07:24.781 "iscsi_get_portal_groups", 00:07:24.781 "iscsi_delete_target_node", 00:07:24.781 "iscsi_target_node_remove_pg_ig_maps", 00:07:24.781 "iscsi_target_node_add_pg_ig_maps", 00:07:24.781 "iscsi_create_target_node", 00:07:24.781 "iscsi_get_target_nodes", 00:07:24.781 "iscsi_delete_initiator_group", 00:07:24.781 "iscsi_initiator_group_remove_initiators", 00:07:24.781 "iscsi_initiator_group_add_initiators", 00:07:24.781 "iscsi_create_initiator_group", 00:07:24.781 "iscsi_get_initiator_groups", 00:07:24.781 "nvmf_set_crdt", 00:07:24.781 "nvmf_set_config", 00:07:24.781 "nvmf_set_max_subsystems", 00:07:24.781 "nvmf_stop_mdns_prr", 00:07:24.781 "nvmf_publish_mdns_prr", 00:07:24.781 "nvmf_subsystem_get_listeners", 00:07:24.781 "nvmf_subsystem_get_qpairs", 00:07:24.781 "nvmf_subsystem_get_controllers", 00:07:24.781 
"nvmf_get_stats", 00:07:24.781 "nvmf_get_transports", 00:07:24.781 "nvmf_create_transport", 00:07:24.781 "nvmf_get_targets", 00:07:24.781 "nvmf_delete_target", 00:07:24.781 "nvmf_create_target", 00:07:24.781 "nvmf_subsystem_allow_any_host", 00:07:24.781 "nvmf_subsystem_remove_host", 00:07:24.781 "nvmf_subsystem_add_host", 00:07:24.781 "nvmf_ns_remove_host", 00:07:24.781 "nvmf_ns_add_host", 00:07:24.781 "nvmf_subsystem_remove_ns", 00:07:24.781 "nvmf_subsystem_add_ns", 00:07:24.781 "nvmf_subsystem_listener_set_ana_state", 00:07:24.781 "nvmf_discovery_get_referrals", 00:07:24.781 "nvmf_discovery_remove_referral", 00:07:24.781 "nvmf_discovery_add_referral", 00:07:24.781 "nvmf_subsystem_remove_listener", 00:07:24.781 "nvmf_subsystem_add_listener", 00:07:24.781 "nvmf_delete_subsystem", 00:07:24.781 "nvmf_create_subsystem", 00:07:24.781 "nvmf_get_subsystems", 00:07:24.781 "env_dpdk_get_mem_stats", 00:07:24.781 "nbd_get_disks", 00:07:24.781 "nbd_stop_disk", 00:07:24.781 "nbd_start_disk", 00:07:24.781 "ublk_recover_disk", 00:07:24.781 "ublk_get_disks", 00:07:24.781 "ublk_stop_disk", 00:07:24.781 "ublk_start_disk", 00:07:24.781 "ublk_destroy_target", 00:07:24.781 "ublk_create_target", 00:07:24.781 "virtio_blk_create_transport", 00:07:24.781 "virtio_blk_get_transports", 00:07:24.781 "vhost_controller_set_coalescing", 00:07:24.781 "vhost_get_controllers", 00:07:24.781 "vhost_delete_controller", 00:07:24.781 "vhost_create_blk_controller", 00:07:24.781 "vhost_scsi_controller_remove_target", 00:07:24.781 "vhost_scsi_controller_add_target", 00:07:24.781 "vhost_start_scsi_controller", 00:07:24.781 "vhost_create_scsi_controller", 00:07:24.781 "thread_set_cpumask", 00:07:24.781 "framework_get_governor", 00:07:24.781 "framework_get_scheduler", 00:07:24.781 "framework_set_scheduler", 00:07:24.781 "framework_get_reactors", 00:07:24.781 "thread_get_io_channels", 00:07:24.781 "thread_get_pollers", 00:07:24.781 "thread_get_stats", 00:07:24.781 "framework_monitor_context_switch", 00:07:24.781 "spdk_kill_instance", 00:07:24.781 "log_enable_timestamps", 00:07:24.781 "log_get_flags", 00:07:24.781 "log_clear_flag", 00:07:24.781 "log_set_flag", 00:07:24.781 "log_get_level", 00:07:24.781 "log_set_level", 00:07:24.781 "log_get_print_level", 00:07:24.781 "log_set_print_level", 00:07:24.781 "framework_enable_cpumask_locks", 00:07:24.781 "framework_disable_cpumask_locks", 00:07:24.781 "framework_wait_init", 00:07:24.781 "framework_start_init", 00:07:24.781 "scsi_get_devices", 00:07:24.781 "bdev_get_histogram", 00:07:24.781 "bdev_enable_histogram", 00:07:24.781 "bdev_set_qos_limit", 00:07:24.781 "bdev_set_qd_sampling_period", 00:07:24.781 "bdev_get_bdevs", 00:07:24.781 "bdev_reset_iostat", 00:07:24.781 "bdev_get_iostat", 00:07:24.781 "bdev_examine", 00:07:24.781 "bdev_wait_for_examine", 00:07:24.781 "bdev_set_options", 00:07:24.781 "notify_get_notifications", 00:07:24.781 "notify_get_types", 00:07:24.781 "accel_get_stats", 00:07:24.781 "accel_set_options", 00:07:24.781 "accel_set_driver", 00:07:24.781 "accel_crypto_key_destroy", 00:07:24.781 "accel_crypto_keys_get", 00:07:24.781 "accel_crypto_key_create", 00:07:24.781 "accel_assign_opc", 00:07:24.781 "accel_get_module_info", 00:07:24.781 "accel_get_opc_assignments", 00:07:24.781 "vmd_rescan", 00:07:24.781 "vmd_remove_device", 00:07:24.781 "vmd_enable", 00:07:24.781 "sock_get_default_impl", 00:07:24.781 "sock_set_default_impl", 00:07:24.781 "sock_impl_set_options", 00:07:24.781 "sock_impl_get_options", 00:07:24.781 "iobuf_get_stats", 00:07:24.781 "iobuf_set_options", 
00:07:24.781 "keyring_get_keys", 00:07:24.781 "framework_get_pci_devices", 00:07:24.781 "framework_get_config", 00:07:24.781 "framework_get_subsystems", 00:07:24.781 "vfu_tgt_set_base_path", 00:07:24.781 "trace_get_info", 00:07:24.781 "trace_get_tpoint_group_mask", 00:07:24.781 "trace_disable_tpoint_group", 00:07:24.781 "trace_enable_tpoint_group", 00:07:24.781 "trace_clear_tpoint_mask", 00:07:24.781 "trace_set_tpoint_mask", 00:07:24.781 "spdk_get_version", 00:07:24.781 "rpc_get_methods" 00:07:24.781 ] 00:07:24.781 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:24.781 09:15:35 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:24.781 09:15:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.781 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:24.781 09:15:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 719477 00:07:24.781 09:15:35 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 719477 ']' 00:07:24.781 09:15:35 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 719477 00:07:24.781 09:15:35 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:07:24.781 09:15:35 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:24.781 09:15:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 719477 00:07:24.781 09:15:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:24.781 09:15:35 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:24.781 09:15:35 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 719477' 00:07:24.781 killing process with pid 719477 00:07:24.781 09:15:35 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 719477 00:07:24.781 09:15:35 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 719477 00:07:25.040 00:07:25.040 real 0m1.223s 00:07:25.040 user 0m2.133s 00:07:25.040 sys 0m0.437s 00:07:25.040 09:15:36 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.040 09:15:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.040 ************************************ 00:07:25.040 END TEST spdkcli_tcp 00:07:25.040 ************************************ 00:07:25.299 09:15:36 -- common/autotest_common.sh@1142 -- # return 0 00:07:25.299 09:15:36 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:25.299 09:15:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.299 09:15:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.299 09:15:36 -- common/autotest_common.sh@10 -- # set +x 00:07:25.299 ************************************ 00:07:25.299 START TEST dpdk_mem_utility 00:07:25.299 ************************************ 00:07:25.299 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:25.299 * Looking for test storage... 
00:07:25.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:25.299 09:15:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:25.299 09:15:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=719680 00:07:25.299 09:15:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 719680 00:07:25.299 09:15:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:25.299 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 719680 ']' 00:07:25.299 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.299 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.299 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.299 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.299 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:25.299 [2024-07-15 09:15:36.370770] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:07:25.300 [2024-07-15 09:15:36.370867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid719680 ] 00:07:25.300 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.300 [2024-07-15 09:15:36.425271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.558 [2024-07-15 09:15:36.529913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.817 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.817 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:07:25.817 09:15:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:25.817 09:15:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:25.817 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.817 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:25.817 { 00:07:25.817 "filename": "/tmp/spdk_mem_dump.txt" 00:07:25.817 } 00:07:25.817 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.817 09:15:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:25.817 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:25.817 1 heaps totaling size 814.000000 MiB 00:07:25.817 size: 814.000000 MiB heap id: 0 00:07:25.817 end heaps---------- 00:07:25.817 8 mempools totaling size 598.116089 MiB 00:07:25.817 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:25.817 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:25.817 size: 84.521057 MiB name: bdev_io_719680 00:07:25.817 size: 51.011292 MiB name: evtpool_719680 00:07:25.817 size: 
50.003479 MiB name: msgpool_719680 00:07:25.817 size: 21.763794 MiB name: PDU_Pool 00:07:25.817 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:25.817 size: 0.026123 MiB name: Session_Pool 00:07:25.817 end mempools------- 00:07:25.817 6 memzones totaling size 4.142822 MiB 00:07:25.817 size: 1.000366 MiB name: RG_ring_0_719680 00:07:25.817 size: 1.000366 MiB name: RG_ring_1_719680 00:07:25.817 size: 1.000366 MiB name: RG_ring_4_719680 00:07:25.817 size: 1.000366 MiB name: RG_ring_5_719680 00:07:25.817 size: 0.125366 MiB name: RG_ring_2_719680 00:07:25.817 size: 0.015991 MiB name: RG_ring_3_719680 00:07:25.817 end memzones------- 00:07:25.817 09:15:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:25.817 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:25.817 list of free elements. size: 12.519348 MiB 00:07:25.817 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:25.817 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:25.817 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:25.817 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:25.817 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:25.817 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:25.817 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:25.817 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:25.817 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:25.817 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:25.817 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:25.817 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:25.817 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:25.817 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:25.817 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:25.818 list of standard malloc elements. 
size: 199.218079 MiB 00:07:25.818 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:25.818 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:25.818 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:25.818 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:25.818 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:25.818 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:25.818 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:25.818 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:25.818 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:25.818 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:25.818 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:25.818 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:25.818 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:25.818 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:25.818 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:25.818 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:25.818 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:25.818 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:25.818 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:25.818 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:25.818 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:25.818 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:25.818 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:25.818 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:25.818 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:25.818 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:25.818 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:25.818 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:25.818 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:25.818 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:25.818 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:25.818 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:25.818 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:25.818 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:25.818 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:25.818 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:25.818 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:25.818 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:25.818 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:25.818 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:25.818 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:25.818 list of memzone associated elements. 
size: 602.262573 MiB 00:07:25.818 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:25.818 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:25.818 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:25.818 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:25.818 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:25.818 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_719680_0 00:07:25.818 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:25.818 associated memzone info: size: 48.002930 MiB name: MP_evtpool_719680_0 00:07:25.818 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:25.818 associated memzone info: size: 48.002930 MiB name: MP_msgpool_719680_0 00:07:25.818 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:25.818 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:25.818 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:25.818 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:25.818 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:25.818 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_719680 00:07:25.818 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:25.818 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_719680 00:07:25.818 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:25.818 associated memzone info: size: 1.007996 MiB name: MP_evtpool_719680 00:07:25.818 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:25.818 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:25.818 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:25.818 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:25.818 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:25.818 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:25.818 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:25.818 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:25.818 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:25.818 associated memzone info: size: 1.000366 MiB name: RG_ring_0_719680 00:07:25.818 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:25.818 associated memzone info: size: 1.000366 MiB name: RG_ring_1_719680 00:07:25.818 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:25.818 associated memzone info: size: 1.000366 MiB name: RG_ring_4_719680 00:07:25.818 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:25.818 associated memzone info: size: 1.000366 MiB name: RG_ring_5_719680 00:07:25.818 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:25.818 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_719680 00:07:25.818 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:25.818 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:25.818 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:25.818 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:25.818 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:25.818 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:25.818 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:25.818 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_719680 00:07:25.818 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:25.818 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:25.818 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:25.818 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:25.818 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:25.818 associated memzone info: size: 0.015991 MiB name: RG_ring_3_719680 00:07:25.818 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:25.818 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:25.818 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:25.818 associated memzone info: size: 0.000183 MiB name: MP_msgpool_719680 00:07:25.818 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:25.818 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_719680 00:07:25.818 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:25.818 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:25.818 09:15:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:25.818 09:15:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 719680 00:07:25.818 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 719680 ']' 00:07:25.818 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 719680 00:07:25.818 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:07:25.818 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.818 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 719680 00:07:25.818 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:25.818 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:25.818 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 719680' 00:07:25.818 killing process with pid 719680 00:07:25.818 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 719680 00:07:25.818 09:15:36 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 719680 00:07:26.385 00:07:26.385 real 0m1.056s 00:07:26.385 user 0m1.036s 00:07:26.385 sys 0m0.378s 00:07:26.385 09:15:37 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.385 09:15:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:26.385 ************************************ 00:07:26.385 END TEST dpdk_mem_utility 00:07:26.385 ************************************ 00:07:26.385 09:15:37 -- common/autotest_common.sh@1142 -- # return 0 00:07:26.385 09:15:37 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:26.385 09:15:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:26.385 09:15:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.385 09:15:37 -- common/autotest_common.sh@10 -- # set +x 00:07:26.385 ************************************ 00:07:26.385 START TEST event 00:07:26.385 ************************************ 00:07:26.385 09:15:37 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:26.385 * Looking for test storage... 
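The two memory dumps above come from scripts/dpdk_mem_info.py, which post-processes the snapshot that the env_dpdk_get_mem_stats RPC writes to /tmp/spdk_mem_dump.txt (the filename is echoed by the RPC earlier in the trace). A sketch of the sequence; reading -m 0 as per-element detail for heap id 0 is inferred from the output it produced:

  ./scripts/rpc.py env_dpdk_get_mem_stats      # snapshot to /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                   # summary: heaps, mempools, memzones
  ./scripts/dpdk_mem_info.py -m 0              # element-level view of heap id 0 (inferred)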
00:07:26.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:26.385 09:15:37 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:26.385 09:15:37 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:26.385 09:15:37 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:26.385 09:15:37 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:26.385 09:15:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.385 09:15:37 event -- common/autotest_common.sh@10 -- # set +x 00:07:26.385 ************************************ 00:07:26.385 START TEST event_perf 00:07:26.385 ************************************ 00:07:26.385 09:15:37 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:26.385 Running I/O for 1 seconds...[2024-07-15 09:15:37.468725] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:07:26.385 [2024-07-15 09:15:37.468810] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid719868 ] 00:07:26.385 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.385 [2024-07-15 09:15:37.527226] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.644 [2024-07-15 09:15:37.630136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.644 [2024-07-15 09:15:37.630199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.644 [2024-07-15 09:15:37.630266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.644 [2024-07-15 09:15:37.630269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.578 Running I/O for 1 seconds... 00:07:27.578 lcore 0: 237903 00:07:27.578 lcore 1: 237905 00:07:27.578 lcore 2: 237903 00:07:27.578 lcore 3: 237903 00:07:27.578 done. 00:07:27.578 00:07:27.578 real 0m1.286s 00:07:27.578 user 0m4.208s 00:07:27.578 sys 0m0.074s 00:07:27.578 09:15:38 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.578 09:15:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.578 ************************************ 00:07:27.578 END TEST event_perf 00:07:27.578 ************************************ 00:07:27.578 09:15:38 event -- common/autotest_common.sh@1142 -- # return 0 00:07:27.578 09:15:38 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:27.578 09:15:38 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:27.578 09:15:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.578 09:15:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.836 ************************************ 00:07:27.836 START TEST event_reactor 00:07:27.836 ************************************ 00:07:27.836 09:15:38 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:27.836 [2024-07-15 09:15:38.803725] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
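Reading the event_perf numbers above: -m 0xF pins one reactor to each of cores 0 through 3 and -t 1 bounds the run to one second, so each lcore line is a per-core event count for that window, roughly 238 thousand events per core per second here. The invocation, as logged:

  ./test/event/event_perf/event_perf -m 0xF -t 1    # 0xF = core mask for lcores 0-3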
00:07:27.836 [2024-07-15 09:15:38.803790] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid720025 ] 00:07:27.836 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.836 [2024-07-15 09:15:38.860098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.836 [2024-07-15 09:15:38.963503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.250 test_start 00:07:29.250 oneshot 00:07:29.250 tick 100 00:07:29.250 tick 100 00:07:29.250 tick 250 00:07:29.250 tick 100 00:07:29.250 tick 100 00:07:29.250 tick 100 00:07:29.250 tick 250 00:07:29.250 tick 500 00:07:29.250 tick 100 00:07:29.250 tick 100 00:07:29.250 tick 250 00:07:29.250 tick 100 00:07:29.250 tick 100 00:07:29.250 test_end 00:07:29.250 00:07:29.250 real 0m1.286s 00:07:29.250 user 0m1.203s 00:07:29.250 sys 0m0.079s 00:07:29.250 09:15:40 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.250 09:15:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:29.250 ************************************ 00:07:29.250 END TEST event_reactor 00:07:29.250 ************************************ 00:07:29.250 09:15:40 event -- common/autotest_common.sh@1142 -- # return 0 00:07:29.250 09:15:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:29.250 09:15:40 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:29.250 09:15:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.250 09:15:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:29.250 ************************************ 00:07:29.250 START TEST event_reactor_perf 00:07:29.250 ************************************ 00:07:29.250 09:15:40 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:29.250 [2024-07-15 09:15:40.139922] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:07:29.250 [2024-07-15 09:15:40.139991] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid720183 ] 00:07:29.250 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.250 [2024-07-15 09:15:40.200395] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.250 [2024-07-15 09:15:40.299272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.622 test_start 00:07:30.622 test_end 00:07:30.622 Performance: 448231 events per second 00:07:30.622 00:07:30.622 real 0m1.280s 00:07:30.622 user 0m1.198s 00:07:30.622 sys 0m0.077s 00:07:30.622 09:15:41 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.622 09:15:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 ************************************ 00:07:30.622 END TEST event_reactor_perf 00:07:30.622 ************************************ 00:07:30.622 09:15:41 event -- common/autotest_common.sh@1142 -- # return 0 00:07:30.622 09:15:41 event -- event/event.sh@49 -- # uname -s 00:07:30.622 09:15:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:30.622 09:15:41 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:30.622 09:15:41 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.622 09:15:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.622 09:15:41 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 ************************************ 00:07:30.622 START TEST event_scheduler 00:07:30.622 ************************************ 00:07:30.622 09:15:41 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:30.622 * Looking for test storage... 00:07:30.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:30.622 09:15:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:30.622 09:15:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=720368 00:07:30.622 09:15:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:30.622 09:15:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:30.622 09:15:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 720368 00:07:30.622 09:15:41 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 720368 ']' 00:07:30.622 09:15:41 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.622 09:15:41 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.622 09:15:41 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
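The Performance line above is the single-core counterpart: reactor_perf keeps events in flight on one reactor for the -t window and reports the completion rate, about 448 thousand events per second in this run (that it measures event throughput on a lone reactor is a reading of the output; the EAL line shows -c 0x1). Invocation as logged:

  ./test/event/reactor_perf/reactor_perf -t 1    # one reactor, one-second run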
00:07:30.622 09:15:41 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.622 09:15:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 [2024-07-15 09:15:41.555700] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:07:30.622 [2024-07-15 09:15:41.555770] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid720368 ] 00:07:30.622 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.622 [2024-07-15 09:15:41.612383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.622 [2024-07-15 09:15:41.730337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.622 [2024-07-15 09:15:41.730392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.622 [2024-07-15 09:15:41.730467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.622 [2024-07-15 09:15:41.730470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.622 09:15:41 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.622 09:15:41 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:07:30.622 09:15:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:30.622 09:15:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.622 09:15:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 [2024-07-15 09:15:41.775262] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:30.622 [2024-07-15 09:15:41.775289] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:30.622 [2024-07-15 09:15:41.775306] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:30.623 [2024-07-15 09:15:41.775317] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:30.623 [2024-07-15 09:15:41.775326] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:30.623 09:15:41 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.623 09:15:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:30.623 09:15:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.623 09:15:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.881 [2024-07-15 09:15:41.872012] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
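The scheduler app above was launched with --wait-for-rpc, so the test can switch the framework to the dynamic scheduler before init completes; the NOTICE lines record that the dpdk governor could not initialize (the core mask covers only some SMT siblings) and that the dynamic scheduler fell back to its defaults of load limit 20, core limit 80, and core busy 95. Over plain rpc.py the same two steps would look like this; both methods appear in the rpc_get_methods list earlier in this log:

  ./scripts/rpc.py framework_set_scheduler dynamic   # here issued while --wait-for-rpc holds pre-init state
  ./scripts/rpc.py framework_start_init              # release the framework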
00:07:30.881 09:15:41 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.881 09:15:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:30.881 09:15:41 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.881 09:15:41 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.881 09:15:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.881 ************************************ 00:07:30.881 START TEST scheduler_create_thread 00:07:30.881 ************************************ 00:07:30.881 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:07:30.881 09:15:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:30.881 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.881 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.881 2 00:07:30.881 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.881 09:15:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:30.881 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.881 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.881 3 00:07:30.881 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.881 09:15:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.882 4 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.882 5 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.882 6 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.882 7 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.882 8 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.882 9 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.882 10 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.882 09:15:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.882 09:15:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.882 09:15:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:30.882 09:15:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:30.882 09:15:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.882 09:15:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.255 09:15:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.255 00:07:32.255 real 0m1.171s 00:07:32.255 user 0m0.009s 00:07:32.255 sys 0m0.005s 00:07:32.255 09:15:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.255 09:15:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.255 ************************************ 00:07:32.255 END TEST scheduler_create_thread 00:07:32.255 ************************************ 00:07:32.255 09:15:43 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:07:32.255 09:15:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:32.255 09:15:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 720368 00:07:32.255 09:15:43 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 720368 ']' 00:07:32.255 09:15:43 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 720368 00:07:32.255 09:15:43 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:07:32.255 09:15:43 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.255 09:15:43 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 720368 00:07:32.255 09:15:43 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:32.255 09:15:43 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:32.255 09:15:43 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 720368' 00:07:32.255 killing process with pid 720368 00:07:32.255 09:15:43 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 720368 00:07:32.255 09:15:43 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 720368 00:07:32.513 [2024-07-15 09:15:43.549891] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
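scheduler_create_thread, which wraps up above, drives everything through an rpc.py plugin: scheduler_thread_create takes a thread name, an optional core mask (-m) and an active percentage (-a), and returns a thread id that scheduler_thread_set_active and scheduler_thread_delete then act on (ids 11 and 12 in this run). The calls, lifted from the trace; the scheduler_plugin module ships with the test and is presumed reachable via PYTHONPATH when rpc_cmd runs:

  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id from create
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12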
00:07:32.770 00:07:32.770 real 0m2.339s 00:07:32.770 user 0m2.671s 00:07:32.770 sys 0m0.317s 00:07:32.770 09:15:43 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.771 09:15:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:32.771 ************************************ 00:07:32.771 END TEST event_scheduler 00:07:32.771 ************************************ 00:07:32.771 09:15:43 event -- common/autotest_common.sh@1142 -- # return 0 00:07:32.771 09:15:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:32.771 09:15:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:32.771 09:15:43 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.771 09:15:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.771 09:15:43 event -- common/autotest_common.sh@10 -- # set +x 00:07:32.771 ************************************ 00:07:32.771 START TEST app_repeat 00:07:32.771 ************************************ 00:07:32.771 09:15:43 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=720694 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 720694' 00:07:32.771 Process app_repeat pid: 720694 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:32.771 spdk_app_start Round 0 00:07:32.771 09:15:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 720694 /var/tmp/spdk-nbd.sock 00:07:32.771 09:15:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 720694 ']' 00:07:32.771 09:15:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:32.771 09:15:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.771 09:15:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:32.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:32.771 09:15:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.771 09:15:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:32.771 [2024-07-15 09:15:43.879709] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
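app_repeat, starting above, brings a target up on its own socket (/var/tmp/spdk-nbd.sock, core mask 0x3) and then repeatedly sets up and tears down devices: each round creates two malloc bdevs and exports them as kernel block devices over nbd, and the dd reads in the output below are the waitfornbd readiness probe for each /dev/nbdX. The per-device flow, condensed from the RPCs that follow (bdev_malloc_create's positionals are size in MB and block size in bytes):

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # 64 MB bdev, 4 KiB blocks -> Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0  # expose it at /dev/nbd0
  dd if=/dev/nbd0 of=./nbdtest bs=4096 count=1 iflag=direct                    # readiness probe from waitfornbd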
00:07:32.771 [2024-07-15 09:15:43.879777] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid720694 ]
00:07:32.771 EAL: No free 2048 kB hugepages reported on node 1
00:07:32.771 [2024-07-15 09:15:43.939453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:33.029 [2024-07-15 09:15:44.050258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:33.029 [2024-07-15 09:15:44.050262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.029 09:15:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:33.029 09:15:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:07:33.029 09:15:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:33.287 Malloc0
00:07:33.287 09:15:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:33.545 Malloc1
00:07:33.545 09:15:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:33.545 09:15:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:33.803 /dev/nbd0
00:07:33.803 09:15:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:33.803 09:15:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:33.803 1+0 records in
00:07:33.803 1+0 records out
00:07:33.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179569 s, 22.8 MB/s
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:33.803 09:15:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:07:33.803 09:15:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:33.803 09:15:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:33.803 09:15:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:34.061 /dev/nbd1
00:07:34.061 09:15:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:34.061 09:15:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:34.061 1+0 records in
00:07:34.061 1+0 records out
00:07:34.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184637 s, 22.2 MB/s
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:34.061 09:15:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:07:34.061 09:15:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:34.061 09:15:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:34.061 09:15:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:34.061 09:15:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:34.061 09:15:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:34.320 09:15:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:34.320 {
00:07:34.320 "nbd_device": "/dev/nbd0",
00:07:34.320 "bdev_name": "Malloc0"
00:07:34.320 },
00:07:34.320 {
00:07:34.320 "nbd_device": "/dev/nbd1",
00:07:34.320 "bdev_name": "Malloc1"
00:07:34.320 }
00:07:34.320 ]'
00:07:34.320 09:15:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:34.320 {
00:07:34.320 "nbd_device": "/dev/nbd0",
00:07:34.320 "bdev_name": "Malloc0"
00:07:34.320 },
00:07:34.320 {
00:07:34.320 "nbd_device": "/dev/nbd1",
00:07:34.320 "bdev_name": "Malloc1"
00:07:34.320 }
00:07:34.320 ]'
00:07:34.320 09:15:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:34.579 /dev/nbd1'
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:34.579 /dev/nbd1'
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:34.579 256+0 records in
00:07:34.579 256+0 records out
00:07:34.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519797 s, 202 MB/s
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:34.579 256+0 records in
00:07:34.579 256+0 records out
00:07:34.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208703 s, 50.2 MB/s
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:34.579 256+0 records in
00:07:34.579 256+0 records out
00:07:34.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021999 s, 47.7 MB/s
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:34.579 09:15:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:34.837 09:15:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:34.837 09:15:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:34.837 09:15:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:34.837 09:15:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:34.837 09:15:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:34.837 09:15:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:34.837 09:15:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:34.837 09:15:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:34.837 09:15:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:34.837 09:15:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:35.095 09:15:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:35.095 09:15:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:35.095 09:15:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:35.095 09:15:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:35.095 09:15:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:35.095 09:15:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:35.095 09:15:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:35.095 09:15:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:35.095 09:15:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:35.095 09:15:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:35.095 09:15:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:35.353 09:15:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:35.353 09:15:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:35.353 09:15:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:35.353 09:15:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:35.353 09:15:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:35.353 09:15:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:35.353 09:15:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:35.353 09:15:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:35.353 09:15:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:35.353 09:15:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:35.353 09:15:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:35.353 09:15:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:35.353 09:15:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:35.610 09:15:46 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:35.868 [2024-07-15 09:15:46.950681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:35.868 [2024-07-15 09:15:47.049036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:35.868 [2024-07-15 09:15:47.049037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:36.125 [2024-07-15 09:15:47.106751] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:36.125 [2024-07-15 09:15:47.106837] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:38.652 09:15:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:38.652 09:15:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:07:38.652 spdk_app_start Round 1
00:07:38.652 09:15:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 720694 /var/tmp/spdk-nbd.sock
00:07:38.652 09:15:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 720694 ']'
00:07:38.652 09:15:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:38.652 09:15:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:38.652 09:15:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:38.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:38.652 09:15:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:38.652 09:15:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:38.910 09:15:49 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:38.910 09:15:49 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:07:38.910 09:15:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:39.168 Malloc0
00:07:39.168 09:15:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:39.426 Malloc1
00:07:39.426 09:15:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:39.426 09:15:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:39.426 09:15:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:39.426 09:15:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:39.426 09:15:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:39.427 09:15:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:39.427 09:15:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:39.427 09:15:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:39.427 09:15:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:39.427 09:15:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:39.427 09:15:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:39.427 09:15:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:39.427 09:15:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:39.427 09:15:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:39.427 09:15:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:39.427 09:15:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:39.685 /dev/nbd0
00:07:39.685 09:15:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:39.685 09:15:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:39.685 1+0 records in
00:07:39.685 1+0 records out
00:07:39.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183718 s, 22.3 MB/s
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:39.685 09:15:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:07:39.685 09:15:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:39.685 09:15:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:39.685 09:15:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:39.943 /dev/nbd1
00:07:39.943 09:15:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:39.943 09:15:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:39.943 1+0 records in
00:07:39.943 1+0 records out
00:07:39.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222053 s, 18.4 MB/s
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:39.943 09:15:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:07:39.943 09:15:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:39.943 09:15:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:39.943 09:15:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:39.943 09:15:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:39.943 09:15:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:40.201 09:15:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:40.201 {
00:07:40.201 "nbd_device": "/dev/nbd0",
00:07:40.201 "bdev_name": "Malloc0"
00:07:40.201 },
00:07:40.201 {
00:07:40.201 "nbd_device": "/dev/nbd1",
00:07:40.201 "bdev_name": "Malloc1"
00:07:40.201 }
00:07:40.201 ]'
00:07:40.201 09:15:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:40.201 {
00:07:40.201 "nbd_device": "/dev/nbd0",
00:07:40.201 "bdev_name": "Malloc0"
00:07:40.201 },
00:07:40.201 {
00:07:40.201 "nbd_device": "/dev/nbd1",
00:07:40.201 "bdev_name": "Malloc1"
00:07:40.201 }
00:07:40.201 ]'
00:07:40.201 09:15:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:40.201 09:15:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:40.201 /dev/nbd1'
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:40.202 /dev/nbd1'
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:40.202 256+0 records in
00:07:40.202 256+0 records out
00:07:40.202 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493118 s, 213 MB/s
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:40.202 256+0 records in
00:07:40.202 256+0 records out
00:07:40.202 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214274 s, 48.9 MB/s
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:40.202 256+0 records in
00:07:40.202 256+0 records out
00:07:40.202 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231638 s, 45.3 MB/s
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:40.202 09:15:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:40.460 09:15:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:40.460 09:15:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:40.460 09:15:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:40.460 09:15:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:40.460 09:15:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:40.460 09:15:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:40.460 09:15:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:40.718 09:15:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:40.718 09:15:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:40.718 09:15:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:40.718 09:15:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:40.718 09:15:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:40.718 09:15:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:40.718 09:15:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:40.718 09:15:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:40.718 09:15:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:40.718 09:15:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:40.976 09:15:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:40.976 09:15:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:40.976 09:15:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:40.976 09:15:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:40.976 09:15:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:40.976 09:15:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:40.976 09:15:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:40.976 09:15:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:40.976 09:15:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:40.976 09:15:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:40.976 09:15:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:41.234 09:15:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:41.234 09:15:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:41.234 09:15:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:41.234 09:15:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:41.234 09:15:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:41.234 09:15:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:41.234 09:15:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:41.234 09:15:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:41.234 09:15:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:41.234 09:15:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:41.234 09:15:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:41.234 09:15:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:41.234 09:15:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:41.493 09:15:52 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:41.750 [2024-07-15 09:15:52.753092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:41.750 [2024-07-15 09:15:52.854828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:41.750 [2024-07-15 09:15:52.854832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:41.750 [2024-07-15 09:15:52.907712] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:41.750 [2024-07-15 09:15:52.907770] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:45.026 09:15:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:45.026 09:15:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:07:45.026 spdk_app_start Round 2
00:07:45.026 09:15:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 720694 /var/tmp/spdk-nbd.sock
00:07:45.026 09:15:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 720694 ']'
00:07:45.026 09:15:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:45.026 09:15:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:45.026 09:15:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:45.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:45.026 09:15:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:45.026 09:15:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:45.026 09:15:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:45.026 09:15:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:07:45.026 09:15:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:45.026 Malloc0
00:07:45.026 09:15:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:45.284 Malloc1
00:07:45.284 09:15:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:45.284 09:15:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:45.541 /dev/nbd0
00:07:45.541 09:15:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:45.541 09:15:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:45.541 1+0 records in
00:07:45.541 1+0 records out
00:07:45.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163421 s, 25.1 MB/s
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:45.541 09:15:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:07:45.541 09:15:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:45.541 09:15:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:45.541 09:15:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:45.799 /dev/nbd1
00:07:45.799 09:15:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:45.799 09:15:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:45.799 1+0 records in
00:07:45.799 1+0 records out
00:07:45.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210494 s, 19.5 MB/s
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:45.799 09:15:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:07:45.799 09:15:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:45.799 09:15:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:45.799 09:15:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:45.799 09:15:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:45.799 09:15:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:46.058 {
00:07:46.058 "nbd_device": "/dev/nbd0",
00:07:46.058 "bdev_name": "Malloc0"
00:07:46.058 },
00:07:46.058 {
00:07:46.058 "nbd_device": "/dev/nbd1",
00:07:46.058 "bdev_name": "Malloc1"
00:07:46.058 }
00:07:46.058 ]'
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:46.058 {
00:07:46.058 "nbd_device": "/dev/nbd0",
00:07:46.058 "bdev_name": "Malloc0"
00:07:46.058 },
00:07:46.058 {
00:07:46.058 "nbd_device": "/dev/nbd1",
00:07:46.058 "bdev_name": "Malloc1"
00:07:46.058 }
00:07:46.058 ]'
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:46.058 /dev/nbd1'
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:46.058 /dev/nbd1'
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:46.058 256+0 records in
00:07:46.058 256+0 records out
00:07:46.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00486365 s, 216 MB/s
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:46.058 256+0 records in
00:07:46.058 256+0 records out
00:07:46.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209541 s, 50.0 MB/s
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:46.058 256+0 records in
00:07:46.058 256+0 records out
00:07:46.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229669 s, 45.7 MB/s
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:46.058 09:15:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:46.316 09:15:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:46.316 09:15:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:46.316 09:15:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:46.316 09:15:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:46.316 09:15:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:46.316 09:15:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:46.316 09:15:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:46.316 09:15:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:46.316 09:15:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:46.316 09:15:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:46.573 09:15:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:46.573 09:15:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:46.573 09:15:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:46.573 09:15:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:46.573 09:15:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:46.573 09:15:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:46.573 09:15:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:46.573 09:15:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:46.573 09:15:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:46.573 09:15:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:46.573 09:15:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:46.830 09:15:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:46.830 09:15:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:46.830 09:15:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:46.830 09:15:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:46.830 09:15:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:46.830 09:15:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:46.830 09:15:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:46.830 09:15:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:46.830 09:15:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:46.830 09:15:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:46.830 09:15:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:46.830 09:15:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:46.830 09:15:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:47.089 09:15:58 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:47.348 [2024-07-15 09:15:58.526347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:47.605 [2024-07-15 09:15:58.629785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:47.605 [2024-07-15 09:15:58.629785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:47.605 [2024-07-15 09:15:58.687327] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:47.605 [2024-07-15 09:15:58.687390] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:50.133 09:16:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 720694 /var/tmp/spdk-nbd.sock
00:07:50.133 09:16:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 720694 ']'
00:07:50.133 09:16:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:50.133 09:16:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:50.133 09:16:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:50.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:50.133 09:16:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.133 09:16:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:50.391 09:16:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.391 09:16:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:50.391 09:16:01 event.app_repeat -- event/event.sh@39 -- # killprocess 720694 00:07:50.391 09:16:01 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 720694 ']' 00:07:50.391 09:16:01 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 720694 00:07:50.391 09:16:01 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:50.391 09:16:01 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.391 09:16:01 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 720694 00:07:50.391 09:16:01 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:50.391 09:16:01 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:50.391 09:16:01 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 720694' 00:07:50.391 killing process with pid 720694 00:07:50.391 09:16:01 event.app_repeat -- common/autotest_common.sh@967 -- # kill 720694 00:07:50.391 09:16:01 event.app_repeat -- common/autotest_common.sh@972 -- # wait 720694 00:07:50.652 spdk_app_start is called in Round 0. 00:07:50.652 Shutdown signal received, stop current app iteration 00:07:50.652 Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 reinitialization... 00:07:50.652 spdk_app_start is called in Round 1. 00:07:50.652 Shutdown signal received, stop current app iteration 00:07:50.652 Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 reinitialization... 00:07:50.652 spdk_app_start is called in Round 2. 00:07:50.652 Shutdown signal received, stop current app iteration 00:07:50.652 Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 reinitialization... 00:07:50.652 spdk_app_start is called in Round 3. 
00:07:50.652 Shutdown signal received, stop current app iteration 00:07:50.652 09:16:01 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:50.652 09:16:01 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:50.652 00:07:50.652 real 0m17.937s 00:07:50.652 user 0m38.875s 00:07:50.652 sys 0m3.261s 00:07:50.652 09:16:01 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.652 09:16:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:50.652 ************************************ 00:07:50.652 END TEST app_repeat 00:07:50.652 ************************************ 00:07:50.652 09:16:01 event -- common/autotest_common.sh@1142 -- # return 0 00:07:50.652 09:16:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:50.652 09:16:01 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:50.652 09:16:01 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.652 09:16:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.652 09:16:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:50.652 ************************************ 00:07:50.652 START TEST cpu_locks 00:07:50.652 ************************************ 00:07:50.652 09:16:01 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:50.911 * Looking for test storage... 00:07:50.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:50.911 09:16:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:50.911 09:16:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:50.911 09:16:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:50.911 09:16:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:50.911 09:16:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.911 09:16:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.911 09:16:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.911 ************************************ 00:07:50.911 START TEST default_locks 00:07:50.911 ************************************ 00:07:50.911 09:16:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:50.911 09:16:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=723063 00:07:50.911 09:16:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:50.911 09:16:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 723063 00:07:50.911 09:16:01 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 723063 ']' 00:07:50.911 09:16:01 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.911 09:16:01 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.911 09:16:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:50.911 09:16:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:50.911 09:16:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:50.911 [2024-07-15 09:16:01.971585] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:07:50.911 [2024-07-15 09:16:01.971691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723063 ]
00:07:50.911 EAL: No free 2048 kB hugepages reported on node 1
00:07:50.911 [2024-07-15 09:16:02.030589] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:51.169 [2024-07-15 09:16:02.136270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:51.428 09:16:02 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:51.428 09:16:02 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:07:51.428 09:16:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 723063
00:07:51.428 09:16:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 723063
00:07:51.428 09:16:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:51.687 lslocks: write error
00:07:51.687 09:16:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 723063
00:07:51.687 09:16:02 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 723063 ']'
00:07:51.687 09:16:02 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 723063
00:07:51.687 09:16:02 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:07:51.687 09:16:02 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:51.687 09:16:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723063
00:07:51.687 09:16:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:51.687 09:16:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:51.687 09:16:02 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723063'
00:07:51.687 killing process with pid 723063
00:07:51.687 09:16:02 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 723063
00:07:51.687 09:16:02 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 723063
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 723063
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 723063
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 723063
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 723063 ']'
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:51.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:51.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (723063) - No such process
00:07:51.947 ERROR: process (pid: 723063) is no longer running
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:51.947
00:07:51.947 real 0m1.203s
00:07:51.947 user 0m1.139s
00:07:51.947 sys 0m0.492s
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:51.947 09:16:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:51.947 ************************************
00:07:51.947 END TEST default_locks
00:07:51.947 ************************************
00:07:52.206 09:16:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:52.206 09:16:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:07:52.206 09:16:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:52.206 09:16:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:52.206 09:16:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:52.206 ************************************
00:07:52.206 START TEST default_locks_via_rpc
00:07:52.206 ************************************
00:07:52.206 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:07:52.206 09:16:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=723313
00:07:52.206 09:16:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:52.206 09:16:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 723313
00:07:52.206 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 723313 ']'
00:07:52.206 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:52.206 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:52.206 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:52.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:52.206 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:52.206 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:52.206 [2024-07-15 09:16:03.225325] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:07:52.206 [2024-07-15 09:16:03.225437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723313 ]
00:07:52.206 EAL: No free 2048 kB hugepages reported on node 1
00:07:52.206 [2024-07-15 09:16:03.283465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:52.207 [2024-07-15 09:16:03.384659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 723313
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 723313
00:07:52.464 09:16:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:52.721 09:16:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 723313
00:07:52.721 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 723313 ']'
00:07:52.721 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 723313
00:07:52.721 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname
00:07:52.721 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:52.721 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723313
00:07:52.721 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:52.721 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:52.721 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723313'
00:07:52.721 killing process with pid 723313
00:07:52.721 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 723313
00:07:52.721 09:16:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 723313
00:07:53.284
00:07:53.284 real 0m1.168s
00:07:53.284 user 0m1.132s
00:07:53.284 sys 0m0.466s
00:07:53.284 09:16:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:53.284 09:16:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:53.284 ************************************
00:07:53.284 END TEST default_locks_via_rpc
00:07:53.284 ************************************
00:07:53.284 09:16:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:53.284 09:16:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:53.284 09:16:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:53.284 09:16:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:53.284 09:16:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:53.284 ************************************
00:07:53.284 START TEST non_locking_app_on_locked_coremask
00:07:53.284 ************************************
00:07:53.284 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask
00:07:53.284 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=723479
00:07:53.284 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:53.284 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 723479 /var/tmp/spdk.sock
00:07:53.284 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 723479 ']'
00:07:53.284 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:53.284 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:53.284 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:53.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:53.284 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:53.284 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:53.284 [2024-07-15 09:16:04.440565] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:07:53.284 [2024-07-15 09:16:04.440672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723479 ]
00:07:53.541 EAL: No free 2048 kB hugepages reported on node 1
00:07:53.541 [2024-07-15 09:16:04.498229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.541 [2024-07-15 09:16:04.601986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:53.799 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:53.799 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:53.799 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=723491
00:07:53.799 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:53.799 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 723491 /var/tmp/spdk2.sock
00:07:53.799 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 723491 ']'
00:07:53.799 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:53.799 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:53.799 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:53.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:53.799 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:53.799 09:16:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:53.799 [2024-07-15 09:16:04.892440] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:07:53.799 [2024-07-15 09:16:04.892535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723491 ]
00:07:53.799 EAL: No free 2048 kB hugepages reported on node 1
00:07:53.799 [2024-07-15 09:16:04.976376] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
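The "CPU core locks deactivated" notice that closes the launch above is the crux of non_locking_app_on_locked_coremask: a target started with --disable-cpumask-locks never tries to take the per-core lock file, so it may share core 0 with the lock-holding first target. In outline, with the binary path, mask, and flags copied from the trace and all error handling omitted:

  # first target claims the core-0 lock
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &
  # second target skips lock acquisition entirely, so the overlap is allowed
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &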
00:07:53.799 [2024-07-15 09:16:04.976418] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:54.056 [2024-07-15 09:16:05.198336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:54.989 09:16:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:54.989 09:16:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:54.989 09:16:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 723479
00:07:54.989 09:16:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 723479
00:07:54.989 09:16:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:55.246 lslocks: write error
00:07:55.246 09:16:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 723479
00:07:55.246 09:16:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 723479 ']'
00:07:55.246 09:16:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 723479
00:07:55.246 09:16:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:55.246 09:16:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:55.246 09:16:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723479
00:07:55.246 09:16:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:55.246 09:16:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:55.246 09:16:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723479'
00:07:55.246 killing process with pid 723479
00:07:55.246 09:16:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 723479
00:07:55.246 09:16:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 723479
00:07:56.178 09:16:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 723491
00:07:56.178 09:16:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 723491 ']'
00:07:56.178 09:16:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 723491
00:07:56.178 09:16:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:56.178 09:16:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:56.178 09:16:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723491
00:07:56.178 09:16:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:56.178 09:16:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:56.178 09:16:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723491'
00:07:56.178 killing process with pid 723491
00:07:56.178 09:16:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 723491
00:07:56.178 09:16:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 723491
00:07:56.435
00:07:56.435 real 0m3.165s
00:07:56.435 user 0m3.357s
00:07:56.435 sys 0m0.991s
00:07:56.435 09:16:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:56.435 09:16:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:56.435 ************************************
00:07:56.435 END TEST non_locking_app_on_locked_coremask
00:07:56.435 ************************************
00:07:56.435 09:16:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:56.435 09:16:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:56.435 09:16:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:56.435 09:16:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:56.435 09:16:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:56.435 ************************************
00:07:56.435 START TEST locking_app_on_unlocked_coremask
00:07:56.435 ************************************
00:07:56.435 09:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask
00:07:56.435 09:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=723872
00:07:56.435 09:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:56.435 09:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 723872 /var/tmp/spdk.sock
00:07:56.435 09:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 723872 ']'
00:07:56.435 09:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:56.435 09:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:56.435 09:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:56.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:56.435 09:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:56.435 09:16:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:56.693 [2024-07-15 09:16:07.657210] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:07:56.693 [2024-07-15 09:16:07.657287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723872 ]
00:07:56.693 EAL: No free 2048 kB hugepages reported on node 1
00:07:56.693 [2024-07-15 09:16:07.713393] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
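Each locks_exist check in the trace asks the kernel which file locks the target process holds; the recurring "lslocks: write error" appears to be harmless noise, since grep -q exits at the first match and closes the pipe while lslocks is still writing. A standalone equivalent of that check, where the pgrep-based pid discovery is illustrative rather than part of the harness:

  pid=$(pgrep -xo spdk_tgt)   # oldest spdk_tgt; assumes the target of interest started first
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"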
00:07:56.693 [2024-07-15 09:16:07.713428] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:56.693 [2024-07-15 09:16:07.821762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:56.951 09:16:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:56.951 09:16:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:56.951 09:16:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=723920
00:07:56.951 09:16:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:56.951 09:16:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 723920 /var/tmp/spdk2.sock
00:07:56.951 09:16:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 723920 ']'
00:07:56.951 09:16:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:56.951 09:16:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:56.951 09:16:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:56.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:56.951 09:16:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:56.951 09:16:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:56.951 [2024-07-15 09:16:08.102950] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:07:56.951 [2024-07-15 09:16:08.103021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723920 ]
00:07:57.209 EAL: No free 2048 kB hugepages reported on node 1
00:07:57.209 [2024-07-15 09:16:08.182149] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:57.209 [2024-07-15 09:16:08.394070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.142 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:58.142 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:58.142 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 723920
00:07:58.142 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 723920
00:07:58.142 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:58.400 lslocks: write error
00:07:58.400 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 723872
00:07:58.400 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 723872 ']'
00:07:58.400 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 723872
00:07:58.400 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:58.400 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:58.400 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723872
00:07:58.400 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:58.400 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:58.400 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723872'
00:07:58.400 killing process with pid 723872
00:07:58.400 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 723872
00:07:58.400 09:16:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 723872
00:07:59.334 09:16:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 723920
00:07:59.334 09:16:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 723920 ']'
00:07:59.334 09:16:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 723920
00:07:59.334 09:16:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:59.334 09:16:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:59.334 09:16:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723920
00:07:59.334 09:16:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:59.334 09:16:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:59.334 09:16:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723920'
00:07:59.334 killing process with pid 723920
00:07:59.334 09:16:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 723920
00:07:59.334 09:16:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 723920
00:07:59.900
00:07:59.900 real 0m3.223s
00:07:59.900 user 0m3.396s
00:07:59.900 sys 0m0.998s
00:07:59.900 09:16:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:59.900 09:16:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:59.900 ************************************
00:07:59.900 END TEST locking_app_on_unlocked_coremask
00:07:59.900 ************************************
00:07:59.900 09:16:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:59.900 09:16:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:59.900 09:16:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:59.900 09:16:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:59.900 09:16:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:59.900 ************************************
00:07:59.900 START TEST locking_app_on_locked_coremask
00:07:59.900 ************************************
00:07:59.900 09:16:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask
00:07:59.900 09:16:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=724232
00:07:59.900 09:16:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:59.900 09:16:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 724232 /var/tmp/spdk.sock
00:07:59.900 09:16:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 724232 ']'
00:07:59.900 09:16:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:59.900 09:16:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:59.900 09:16:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:59.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:59.900 09:16:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:59.900 09:16:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:59.900 [2024-07-15 09:16:10.931254] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:07:59.900 [2024-07-15 09:16:10.931370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724232 ]
00:07:59.900 EAL: No free 2048 kB hugepages reported on node 1
00:07:59.900 [2024-07-15 09:16:10.991978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:00.159 [2024-07-15 09:16:11.099120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:00.159 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:00.159 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:08:00.159 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:08:00.159 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=724350
00:08:00.159 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 724350 /var/tmp/spdk2.sock
00:08:00.160 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:08:00.160 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 724350 /var/tmp/spdk2.sock
00:08:00.160 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:08:00.160 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:00.160 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:08:00.160 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:00.160 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 724350 /var/tmp/spdk2.sock
00:08:00.160 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 724350 ']'
00:08:00.160 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:00.160 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:00.160 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:00.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:00.160 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:00.160 09:16:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:00.418 [2024-07-15 09:16:11.386053] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:08:00.418 [2024-07-15 09:16:11.386137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724350 ]
00:08:00.418 EAL: No free 2048 kB hugepages reported on node 1
00:08:00.418 [2024-07-15 09:16:11.468104] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 724232 has claimed it.
00:08:00.418 [2024-07-15 09:16:11.468168] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:00.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (724350) - No such process
00:08:00.984 ERROR: process (pid: 724350) is no longer running
00:08:00.984 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:00.984 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1
00:08:00.984 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:08:00.984 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:08:00.984 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:08:00.984 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:08:00.984 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 724232
00:08:00.984 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 724232
00:08:00.984 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:01.551 lslocks: write error
00:08:01.551 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 724232
00:08:01.551 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 724232 ']'
00:08:01.551 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 724232
00:08:01.551 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:08:01.551 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:01.551 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 724232
00:08:01.551 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:08:01.551 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:08:01.551 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 724232'
00:08:01.551 killing process with pid 724232
00:08:01.551 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 724232
00:08:01.551 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 724232
00:08:01.810
00:08:01.810 real 0m2.023s
00:08:01.810 user 0m2.219s
00:08:01.810 sys 0m0.620s
00:08:01.810 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:01.810 09:16:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:01.810 ************************************
00:08:01.810 END TEST locking_app_on_locked_coremask
00:08:01.810 ************************************
00:08:01.810 09:16:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:08:01.810 09:16:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:08:01.810 09:16:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:01.810 09:16:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:01.810 09:16:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:01.810 ************************************
00:08:01.810 START TEST locking_overlapped_coremask
00:08:01.810 ************************************
00:08:01.810 09:16:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask
00:08:01.810 09:16:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=724530
00:08:01.810 09:16:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:08:01.810 09:16:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 724530 /var/tmp/spdk.sock
00:08:01.810 09:16:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 724530 ']'
00:08:01.810 09:16:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:01.810 09:16:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:01.810 09:16:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:01.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:01.810 09:16:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:01.810 09:16:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:02.068 [2024-07-15 09:16:13.012481] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
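locking_app_on_locked_coremask, which finishes above, is the negative case: the second target (pid 724350) requested the same core without --disable-cpumask-locks, claim_cpu_cores logged "Cannot create lock on core 0, probably process 724232 has claimed it", and the process exited before ever listening, which is why the NOT-wrapped waitforlisten is expected to fail and "kill: (724350) - No such process" follows. Reproduced in miniature (masks and socket path from the trace; the sleep is an illustrative stand-in for waiting on the RPC socket):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &   # claims core 0
  sleep 1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # should exit: core 0 is locked
  echo "second target exited with status $?"   # nonzero expected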
00:08:02.068 [2024-07-15 09:16:13.012554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724530 ]
00:08:02.068 EAL: No free 2048 kB hugepages reported on node 1
00:08:02.068 [2024-07-15 09:16:13.068633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:02.068 [2024-07-15 09:16:13.181611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:02.068 [2024-07-15 09:16:13.181675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:02.068 [2024-07-15 09:16:13.181678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=724656
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 724656 /var/tmp/spdk2.sock
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 724656 /var/tmp/spdk2.sock
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 724656 /var/tmp/spdk2.sock
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 724656 ']'
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:02.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:02.326 09:16:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:02.327 [2024-07-15 09:16:13.492820] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:08:02.327 [2024-07-15 09:16:13.492920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724656 ]
00:08:02.584 EAL: No free 2048 kB hugepages reported on node 1
00:08:02.584 [2024-07-15 09:16:13.581138] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 724530 has claimed it.
00:08:02.584 [2024-07-15 09:16:13.581195] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:03.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (724656) - No such process
00:08:03.148 ERROR: process (pid: 724656) is no longer running
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 724530
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 724530 ']'
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 724530
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 724530
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 724530'
00:08:03.148 killing process with pid 724530
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 724530
00:08:03.148 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 724530
00:08:03.713
00:08:03.713 real 0m1.685s
00:08:03.713 user 0m4.476s
00:08:03.713 sys 0m0.445s
00:08:03.713 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:03.713 09:16:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:03.713 ************************************
00:08:03.713 END TEST locking_overlapped_coremask
00:08:03.713 ************************************
00:08:03.713 09:16:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:08:03.713 09:16:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:08:03.713 09:16:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:03.713 09:16:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:03.713 09:16:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:03.713 ************************************
00:08:03.713 START TEST locking_overlapped_coremask_via_rpc
00:08:03.713 ************************************
00:08:03.713 09:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc
00:08:03.713 09:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=724820
00:08:03.713 09:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:08:03.713 09:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 724820 /var/tmp/spdk.sock
00:08:03.713 09:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 724820 ']'
00:08:03.713 09:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:03.713 09:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:03.713 09:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:03.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:03.713 09:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:03.713 09:16:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:03.713 [2024-07-15 09:16:14.743590] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:08:03.713 [2024-07-15 09:16:14.743701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724820 ]
00:08:03.713 EAL: No free 2048 kB hugepages reported on node 1
00:08:03.713 [2024-07-15 09:16:14.800768] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
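The check_remaining_locks step a few entries up shows where the locks physically live: one lock file per claimed core, /var/tmp/spdk_cpu_lock_000 through /var/tmp/spdk_cpu_lock_002 for the 0x7 mask. Assuming only that naming convention, which appears verbatim in the trace, the state can be inspected directly from a shell:

  ls /var/tmp/spdk_cpu_lock_*        # one file per locked core
  lslocks | grep spdk_cpu_lock       # shows which pid holds each lock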
00:08:03.713 [2024-07-15 09:16:14.800832] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:03.714 [2024-07-15 09:16:14.904721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:03.714 [2024-07-15 09:16:14.904783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:03.714 [2024-07-15 09:16:14.904796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:03.972 09:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:03.972 09:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:08:03.972 09:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=724835
00:08:03.972 09:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:08:03.972 09:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 724835 /var/tmp/spdk2.sock
00:08:03.972 09:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 724835 ']'
00:08:03.972 09:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:03.972 09:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:03.972 09:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:03.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:03.972 09:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:03.972 09:16:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:04.231 [2024-07-15 09:16:15.195047] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:08:04.231 [2024-07-15 09:16:15.195159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724835 ]
00:08:04.231 EAL: No free 2048 kB hugepages reported on node 1
00:08:04.231 [2024-07-15 09:16:15.283461] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:04.231 [2024-07-15 09:16:15.283501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:04.490 [2024-07-15 09:16:15.507112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:08:04.490 [2024-07-15 09:16:15.507168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:08:04.490 [2024-07-15 09:16:15.507171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:05.056 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:05.056 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:08:05.056 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:08:05.056 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:05.056 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:05.056 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:05.056 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:05.056 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0
00:08:05.056 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:05.056 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:05.057 [2024-07-15 09:16:16.154897] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 724820 has claimed it.
00:08:05.057 request: 00:08:05.057 { 00:08:05.057 "method": "framework_enable_cpumask_locks", 00:08:05.057 "req_id": 1 00:08:05.057 } 00:08:05.057 Got JSON-RPC error response 00:08:05.057 response: 00:08:05.057 { 00:08:05.057 "code": -32603, 00:08:05.057 "message": "Failed to claim CPU core: 2" 00:08:05.057 } 00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 724820 /var/tmp/spdk.sock 00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 724820 ']' 00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.057 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.315 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:05.315 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:05.315 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 724835 /var/tmp/spdk2.sock 00:08:05.315 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 724835 ']' 00:08:05.315 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:05.315 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.315 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:05.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
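The -32603 response above is the outcome this test is after: pid 724820 (reactors on cores 0, 1 and 2 earlier in the log) already holds the per-core lock files, so the second target cannot claim core 2 over /var/tmp/spdk2.sock. A minimal sketch of the collision; the first target's mask and --disable-cpumask-locks flag are inferred, everything else is taken from the commands traced above:

spdk_tgt -m 0x7 --disable-cpumask-locks &                          # first target, cores 0-2 (mask inferred from its reactor log)
spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # second target, cores 2-4, as launched above
rpc_cmd framework_enable_cpumask_locks                             # first target claims /var/tmp/spdk_cpu_lock_000..002
rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks      # returns -32603: core 2 is already claimed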
00:08:05.315 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.315 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.573 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:05.573 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:05.573 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:05.573 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:05.573 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:05.573 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:05.573 00:08:05.573 real 0m1.974s 00:08:05.573 user 0m1.041s 00:08:05.573 sys 0m0.162s 00:08:05.573 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.573 09:16:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.573 ************************************ 00:08:05.573 END TEST locking_overlapped_coremask_via_rpc 00:08:05.573 ************************************ 00:08:05.573 09:16:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:05.573 09:16:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:05.573 09:16:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 724820 ]] 00:08:05.573 09:16:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 724820 00:08:05.573 09:16:16 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 724820 ']' 00:08:05.573 09:16:16 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 724820 00:08:05.573 09:16:16 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:08:05.573 09:16:16 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:05.573 09:16:16 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 724820 00:08:05.573 09:16:16 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:05.573 09:16:16 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:05.573 09:16:16 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 724820' 00:08:05.573 killing process with pid 724820 00:08:05.573 09:16:16 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 724820 00:08:05.573 09:16:16 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 724820 00:08:06.140 09:16:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 724835 ]] 00:08:06.140 09:16:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 724835 00:08:06.140 09:16:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 724835 ']' 00:08:06.140 09:16:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 724835 00:08:06.140 09:16:17 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
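The check_remaining_locks trace above (cpu_locks.sh@36-38) is the matching positive assertion: after the second target's claim is rejected, exactly the first target's three lock files are on disk. The three lines in isolation, with the escaped pattern match simplified:

locks=(/var/tmp/spdk_cpu_lock_*)                     # glob: lock files actually present
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # brace expansion: one file per claimed core 0-2
[[ ${locks[*]} == "${locks_expected[*]}" ]]          # assert the two lists are identical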
00:08:06.140 09:16:17 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:06.140 09:16:17 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 724835 00:08:06.140 09:16:17 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:08:06.140 09:16:17 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:08:06.140 09:16:17 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 724835' 00:08:06.140 killing process with pid 724835 00:08:06.140 09:16:17 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 724835 00:08:06.140 09:16:17 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 724835 00:08:06.706 09:16:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:06.706 09:16:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:06.706 09:16:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 724820 ]] 00:08:06.706 09:16:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 724820 00:08:06.706 09:16:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 724820 ']' 00:08:06.706 09:16:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 724820 00:08:06.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (724820) - No such process 00:08:06.706 09:16:17 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 724820 is not found' 00:08:06.706 Process with pid 724820 is not found 00:08:06.706 09:16:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 724835 ]] 00:08:06.706 09:16:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 724835 00:08:06.706 09:16:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 724835 ']' 00:08:06.706 09:16:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 724835 00:08:06.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (724835) - No such process 00:08:06.706 09:16:17 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 724835 is not found' 00:08:06.706 Process with pid 724835 is not found 00:08:06.706 09:16:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:06.706 00:08:06.706 real 0m15.795s 00:08:06.706 user 0m27.708s 00:08:06.706 sys 0m5.057s 00:08:06.706 09:16:17 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.706 09:16:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:06.706 ************************************ 00:08:06.706 END TEST cpu_locks 00:08:06.706 ************************************ 00:08:06.706 09:16:17 event -- common/autotest_common.sh@1142 -- # return 0 00:08:06.706 00:08:06.706 real 0m40.291s 00:08:06.706 user 1m16.022s 00:08:06.706 sys 0m9.098s 00:08:06.706 09:16:17 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.706 09:16:17 event -- common/autotest_common.sh@10 -- # set +x 00:08:06.706 ************************************ 00:08:06.706 END TEST event 00:08:06.706 ************************************ 00:08:06.706 09:16:17 -- common/autotest_common.sh@1142 -- # return 0 00:08:06.706 09:16:17 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:06.706 09:16:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.706 09:16:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.706 09:16:17 -- 
common/autotest_common.sh@10 -- # set +x 00:08:06.706 ************************************ 00:08:06.706 START TEST thread 00:08:06.706 ************************************ 00:08:06.706 09:16:17 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:06.706 * Looking for test storage... 00:08:06.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:06.706 09:16:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:06.706 09:16:17 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:06.706 09:16:17 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.706 09:16:17 thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.706 ************************************ 00:08:06.706 START TEST thread_poller_perf 00:08:06.706 ************************************ 00:08:06.706 09:16:17 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:06.706 [2024-07-15 09:16:17.806992] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:06.706 [2024-07-15 09:16:17.807060] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725278 ] 00:08:06.706 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.706 [2024-07-15 09:16:17.868084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.965 [2024-07-15 09:16:17.978623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.965 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:07.899 ====================================== 00:08:07.899 busy:2708754672 (cyc) 00:08:07.899 total_run_count: 365000 00:08:07.899 tsc_hz: 2700000000 (cyc) 00:08:07.899 ====================================== 00:08:07.899 poller_cost: 7421 (cyc), 2748 (nsec) 00:08:08.158 00:08:08.158 real 0m1.302s 00:08:08.158 user 0m1.215s 00:08:08.158 sys 0m0.082s 00:08:08.158 09:16:19 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.158 09:16:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:08.158 ************************************ 00:08:08.158 END TEST thread_poller_perf 00:08:08.158 ************************************ 00:08:08.158 09:16:19 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:08.158 09:16:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:08.158 09:16:19 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:08.158 09:16:19 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.158 09:16:19 thread -- common/autotest_common.sh@10 -- # set +x 00:08:08.158 ************************************ 00:08:08.158 START TEST thread_poller_perf 00:08:08.158 ************************************ 00:08:08.158 09:16:19 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:08.158 [2024-07-15 09:16:19.157025] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:08.158 [2024-07-15 09:16:19.157090] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725471 ] 00:08:08.158 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.158 [2024-07-15 09:16:19.216451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.158 [2024-07-15 09:16:19.317230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.158 Running 1000 pollers for 1 seconds with 0 microseconds period. 
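The poller_cost line above is plain division over the printed counters: busy cycles per run, converted to nanoseconds through tsc_hz. A quick check of the 1-microsecond-period run (the 0-microsecond run that follows works out the same way):

awk 'BEGIN {
  busy = 2708754672; runs = 365000; tsc_hz = 2700000000
  cyc  = busy / runs             # 2708754672 / 365000 ~= 7421 cycles per poll
  nsec = cyc / (tsc_hz / 1e9)    # 7421 cycles at 2.7 GHz ~= 2748 ns
  printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
}'
# the 0-microsecond run below: 2702057502 / 4804000 ~= 562 cyc ~= 208 ns, i.e. the
# timed pollers cost roughly 13x more per invocation in this run (7421 / 562 ~= 13.2)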
00:08:09.533 ====================================== 00:08:09.533 busy:2702057502 (cyc) 00:08:09.533 total_run_count: 4804000 00:08:09.533 tsc_hz: 2700000000 (cyc) 00:08:09.533 ====================================== 00:08:09.533 poller_cost: 562 (cyc), 208 (nsec) 00:08:09.533 00:08:09.533 real 0m1.282s 00:08:09.533 user 0m1.195s 00:08:09.533 sys 0m0.082s 00:08:09.533 09:16:20 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.533 09:16:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:09.533 ************************************ 00:08:09.533 END TEST thread_poller_perf 00:08:09.533 ************************************ 00:08:09.533 09:16:20 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:09.533 09:16:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:09.533 00:08:09.533 real 0m2.739s 00:08:09.533 user 0m2.485s 00:08:09.533 sys 0m0.255s 00:08:09.533 09:16:20 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.533 09:16:20 thread -- common/autotest_common.sh@10 -- # set +x 00:08:09.533 ************************************ 00:08:09.533 END TEST thread 00:08:09.533 ************************************ 00:08:09.533 09:16:20 -- common/autotest_common.sh@1142 -- # return 0 00:08:09.533 09:16:20 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:08:09.533 09:16:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.533 09:16:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.533 09:16:20 -- common/autotest_common.sh@10 -- # set +x 00:08:09.533 ************************************ 00:08:09.533 START TEST accel 00:08:09.533 ************************************ 00:08:09.533 09:16:20 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:08:09.533 * Looking for test storage... 00:08:09.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:09.533 09:16:20 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:09.533 09:16:20 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:09.533 09:16:20 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:09.533 09:16:20 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=725672 00:08:09.533 09:16:20 accel -- accel/accel.sh@63 -- # waitforlisten 725672 00:08:09.533 09:16:20 accel -- common/autotest_common.sh@829 -- # '[' -z 725672 ']' 00:08:09.533 09:16:20 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.533 09:16:20 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:09.533 09:16:20 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.533 09:16:20 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:09.533 09:16:20 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:09.533 09:16:20 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.533 09:16:20 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.533 09:16:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.533 09:16:20 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.533 09:16:20 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.533 09:16:20 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.533 09:16:20 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.533 09:16:20 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:09.533 09:16:20 accel -- accel/accel.sh@41 -- # jq -r . 00:08:09.533 [2024-07-15 09:16:20.606302] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:09.533 [2024-07-15 09:16:20.606381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725672 ] 00:08:09.533 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.533 [2024-07-15 09:16:20.664756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.792 [2024-07-15 09:16:20.770825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.050 09:16:21 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.050 09:16:21 accel -- common/autotest_common.sh@862 -- # return 0 00:08:10.050 09:16:21 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:10.050 09:16:21 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:10.050 09:16:21 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:10.050 09:16:21 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:10.050 09:16:21 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:10.050 09:16:21 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:10.050 09:16:21 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.050 09:16:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.050 09:16:21 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:10.050 09:16:21 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.050 09:16:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.050 09:16:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.050 09:16:21 accel -- accel/accel.sh@75 -- # killprocess 725672 00:08:10.050 09:16:21 accel -- common/autotest_common.sh@948 -- # '[' -z 725672 ']' 00:08:10.050 09:16:21 accel -- common/autotest_common.sh@952 -- # kill -0 725672 00:08:10.050 09:16:21 accel -- common/autotest_common.sh@953 -- # uname 00:08:10.050 09:16:21 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:10.050 09:16:21 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 725672 00:08:10.050 09:16:21 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:10.051 09:16:21 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:10.051 09:16:21 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 725672' 00:08:10.051 killing process with pid 725672 00:08:10.051 09:16:21 accel -- common/autotest_common.sh@967 -- # kill 725672 00:08:10.051 09:16:21 accel -- common/autotest_common.sh@972 -- # wait 725672 00:08:10.617 09:16:21 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:10.617 09:16:21 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:10.617 09:16:21 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.617 09:16:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.617 09:16:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.617 09:16:21 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:08:10.617 09:16:21 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:10.617 09:16:21 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:10.617 09:16:21 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.617 09:16:21 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.617 09:16:21 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.617 09:16:21 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.617 09:16:21 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.617 09:16:21 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:10.617 09:16:21 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:08:10.617 09:16:21 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.617 09:16:21 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:10.617 09:16:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:10.617 09:16:21 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:10.617 09:16:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:10.617 09:16:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.617 09:16:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.617 ************************************ 00:08:10.617 START TEST accel_missing_filename 00:08:10.617 ************************************ 00:08:10.617 09:16:21 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:08:10.617 09:16:21 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:10.617 09:16:21 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:10.617 09:16:21 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:10.617 09:16:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.617 09:16:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:10.617 09:16:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.617 09:16:21 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:10.617 09:16:21 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:10.617 09:16:21 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:10.617 09:16:21 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.617 09:16:21 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.617 09:16:21 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.617 09:16:21 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.617 09:16:21 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.617 09:16:21 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:10.617 09:16:21 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:10.617 [2024-07-15 09:16:21.631290] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:10.617 [2024-07-15 09:16:21.631354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725838 ] 00:08:10.617 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.617 [2024-07-15 09:16:21.688553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.617 [2024-07-15 09:16:21.799898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.876 [2024-07-15 09:16:21.848900] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.876 [2024-07-15 09:16:21.921638] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:10.876 A filename is required. 
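The 'A filename is required.' abort is the expected negative result: -w compress was run without an input file, and per the accel_perf option help printed further down in this log, compress/decompress workloads take their input from the file named by -l. The positive form, as the accel_compress_verify test below uses it ($SPDK_DIR here is a placeholder for the jenkins workspace path traced above):

accel_perf -t 1 -w compress -l "$SPDK_DIR/test/accel/bib" -y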
00:08:10.876 09:16:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:10.876 09:16:22 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:10.876 09:16:22 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:10.876 09:16:22 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:10.876 09:16:22 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:10.876 09:16:22 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:10.876 00:08:10.876 real 0m0.421s 00:08:10.876 user 0m0.324s 00:08:10.876 sys 0m0.131s 00:08:10.876 09:16:22 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.876 09:16:22 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:10.876 ************************************ 00:08:10.876 END TEST accel_missing_filename 00:08:10.876 ************************************ 00:08:10.876 09:16:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:10.876 09:16:22 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:10.876 09:16:22 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:10.876 09:16:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.876 09:16:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.134 ************************************ 00:08:11.134 START TEST accel_compress_verify 00:08:11.134 ************************************ 00:08:11.134 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:11.134 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:11.134 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:11.134 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:11.134 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.134 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:11.134 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.134 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:11.134 09:16:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:11.134 09:16:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:11.134 09:16:22 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.134 09:16:22 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.134 09:16:22 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.134 09:16:22 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.134 09:16:22 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.135 09:16:22 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:11.135 09:16:22 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:11.135 [2024-07-15 09:16:22.099874] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:11.135 [2024-07-15 09:16:22.099937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725869 ] 00:08:11.135 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.135 [2024-07-15 09:16:22.156949] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.135 [2024-07-15 09:16:22.260985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.135 [2024-07-15 09:16:22.313250] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.393 [2024-07-15 09:16:22.387103] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:11.393 00:08:11.393 Compression does not support the verify option, aborting. 00:08:11.393 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:11.393 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:11.393 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:11.393 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:11.393 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:11.393 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:11.393 00:08:11.393 real 0m0.419s 00:08:11.393 user 0m0.329s 00:08:11.393 sys 0m0.124s 00:08:11.393 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.393 09:16:22 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:11.393 ************************************ 00:08:11.393 END TEST accel_compress_verify 00:08:11.393 ************************************ 00:08:11.393 09:16:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.393 09:16:22 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:11.393 09:16:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:11.393 09:16:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.393 09:16:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.393 ************************************ 00:08:11.393 START TEST accel_wrong_workload 00:08:11.393 ************************************ 00:08:11.393 09:16:22 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:08:11.393 09:16:22 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:11.393 09:16:22 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:11.393 09:16:22 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:11.393 09:16:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.393 09:16:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:11.393 09:16:22 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.393 09:16:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:08:11.393 09:16:22 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:11.393 09:16:22 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:11.393 09:16:22 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.393 09:16:22 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.393 09:16:22 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.393 09:16:22 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.394 09:16:22 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.394 09:16:22 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:11.394 09:16:22 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:11.394 Unsupported workload type: foobar 00:08:11.394 [2024-07-15 09:16:22.565445] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:11.394 accel_perf options: 00:08:11.394 [-h help message] 00:08:11.394 [-q queue depth per core] 00:08:11.394 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:11.394 [-T number of threads per core 00:08:11.394 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:11.394 [-t time in seconds] 00:08:11.394 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:11.394 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:11.394 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:11.394 [-l for compress/decompress workloads, name of uncompressed input file 00:08:11.394 [-S for crc32c workload, use this seed value (default 0) 00:08:11.394 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:11.394 [-f for fill workload, use this BYTE value (default 255) 00:08:11.394 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:11.394 [-y verify result if this switch is on] 00:08:11.394 [-a tasks to allocate per core (default: same value as -q)] 00:08:11.394 Can be used to spread operations across a wider range of memory. 
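The usage dump above is accel_perf rejecting -w foobar against its fixed workload list, which is exactly what the NOT wrapper expects (es=1). Any name from the -w list starts normally; the crc32c run later in this log, for example, is:

accel_perf -t 1 -w crc32c -S 32 -y    # valid workload: -S 32 is the crc32c seed, transfer size defaults to 4 KiB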
00:08:11.394 09:16:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:11.394 09:16:22 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:11.394 09:16:22 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:11.394 09:16:22 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:11.394 00:08:11.394 real 0m0.023s 00:08:11.394 user 0m0.016s 00:08:11.394 sys 0m0.007s 00:08:11.394 09:16:22 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.394 09:16:22 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:11.394 ************************************ 00:08:11.394 END TEST accel_wrong_workload 00:08:11.394 ************************************ 00:08:11.394 Error: writing output failed: Broken pipe 00:08:11.653 09:16:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.653 09:16:22 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:11.653 09:16:22 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:11.653 09:16:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.653 09:16:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.653 ************************************ 00:08:11.653 START TEST accel_negative_buffers 00:08:11.653 ************************************ 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:11.653 09:16:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:11.653 09:16:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:11.653 09:16:22 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.653 09:16:22 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.653 09:16:22 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.653 09:16:22 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.653 09:16:22 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.653 09:16:22 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:11.653 09:16:22 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:11.653 -x option must be non-negative. 
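Same pattern here: -x -1 trips the non-negative check before any work is queued. Going by the help text above (xor needs at least two source buffers), the smallest accepted form would be:

accel_perf -t 1 -w xor -y -x 2    # assumed-minimal xor invocation; not part of this run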
00:08:11.653 [2024-07-15 09:16:22.634034] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:11.653 accel_perf options: 00:08:11.653 [-h help message] 00:08:11.653 [-q queue depth per core] 00:08:11.653 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:11.653 [-T number of threads per core 00:08:11.653 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:11.653 [-t time in seconds] 00:08:11.653 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:11.653 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:11.653 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:11.653 [-l for compress/decompress workloads, name of uncompressed input file 00:08:11.653 [-S for crc32c workload, use this seed value (default 0) 00:08:11.653 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:11.653 [-f for fill workload, use this BYTE value (default 255) 00:08:11.653 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:11.653 [-y verify result if this switch is on] 00:08:11.653 [-a tasks to allocate per core (default: same value as -q)] 00:08:11.653 Can be used to spread operations across a wider range of memory. 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:11.653 00:08:11.653 real 0m0.023s 00:08:11.653 user 0m0.016s 00:08:11.653 sys 0m0.007s 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.653 09:16:22 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:11.653 ************************************ 00:08:11.653 END TEST accel_negative_buffers 00:08:11.653 ************************************ 00:08:11.653 Error: writing output failed: Broken pipe 00:08:11.653 09:16:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.653 09:16:22 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:11.653 09:16:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:11.653 09:16:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.653 09:16:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.653 ************************************ 00:08:11.653 START TEST accel_crc32c 00:08:11.653 ************************************ 00:08:11.653 09:16:22 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:11.653 09:16:22 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:11.653 [2024-07-15 09:16:22.703603] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:11.653 [2024-07-15 09:16:22.703666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid726049 ] 00:08:11.653 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.653 [2024-07-15 09:16:22.761031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.912 [2024-07-15 09:16:22.874062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.912 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.912 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.912 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.912 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.913 09:16:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:13.290 09:16:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.290 00:08:13.290 real 0m1.437s 00:08:13.290 user 0m1.310s 00:08:13.290 sys 0m0.130s 00:08:13.290 09:16:24 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.290 09:16:24 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:13.290 ************************************ 00:08:13.290 END TEST accel_crc32c 00:08:13.290 ************************************ 00:08:13.290 09:16:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:13.290 09:16:24 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:13.290 09:16:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:13.290 09:16:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.290 09:16:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.290 ************************************ 00:08:13.290 START TEST accel_crc32c_C2 00:08:13.290 ************************************ 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.290 09:16:24 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:13.290 [2024-07-15 09:16:24.192009] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:13.290 [2024-07-15 09:16:24.192073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid726211 ] 00:08:13.290 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.290 [2024-07-15 09:16:24.248381] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.290 [2024-07-15 09:16:24.352791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:13.290 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.291 09:16:24 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:08:13.291 09:16:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.692 00:08:14.692 real 0m1.434s 00:08:14.692 user 0m1.301s 00:08:14.692 sys 0m0.135s 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.692 09:16:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:14.692 ************************************ 00:08:14.692 END TEST accel_crc32c_C2 00:08:14.692 ************************************ 00:08:14.692 09:16:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:14.692 09:16:25 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:14.692 09:16:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:14.692 09:16:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.692 09:16:25 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.692 ************************************ 00:08:14.692 START TEST accel_copy 00:08:14.692 ************************************ 00:08:14.692 09:16:25 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
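Each START/END TEST block in this log reduces to one accel_perf run; the exact command for accel_copy appears in the trace just below. To reproduce a single run by hand from the SPDK build tree, something like the following should work (the flags are taken verbatim from this log, but feeding an empty JSON object on fd 62 is an assumption about a minimal config, not what build_accel_config actually emits):

    # -t 1: run for one second; -w copy: workload type; -y: verify the result
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 62<<< '{}'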
00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:14.692 09:16:25 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:14.692 [2024-07-15 09:16:25.675626] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:14.692 [2024-07-15 09:16:25.675690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid726375 ] 00:08:14.692 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.692 [2024-07-15 09:16:25.736295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.692 [2024-07-15 09:16:25.844112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.949 09:16:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.318 
09:16:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:16.318 09:16:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.318 00:08:16.318 real 0m1.442s 00:08:16.318 user 0m1.305s 00:08:16.318 sys 0m0.139s 00:08:16.318 09:16:27 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.318 09:16:27 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:16.318 ************************************ 00:08:16.318 END TEST accel_copy 00:08:16.318 ************************************ 00:08:16.318 09:16:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:16.318 09:16:27 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.318 09:16:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:16.318 09:16:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.318 09:16:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.318 ************************************ 00:08:16.318 START TEST accel_fill 00:08:16.318 ************************************ 00:08:16.318 09:16:27 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.318 09:16:27 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:16.318 09:16:27 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:16.318 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.318 09:16:27 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.318 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:16.319 [2024-07-15 09:16:27.163689] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:16.319 [2024-07-15 09:16:27.163751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid726637 ] 00:08:16.319 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.319 [2024-07-15 09:16:27.220087] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.319 [2024-07-15 09:16:27.322964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.319 09:16:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:17.691 09:16:28 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:17.691 09:16:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.691 00:08:17.691 real 0m1.428s 00:08:17.691 user 0m1.304s 00:08:17.691 sys 0m0.125s 00:08:17.691 09:16:28 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.691 09:16:28 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:17.691 ************************************ 00:08:17.691 END TEST accel_fill 00:08:17.691 ************************************ 00:08:17.691 09:16:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:17.691 09:16:28 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:17.691 09:16:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:17.691 09:16:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.691 09:16:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.691 ************************************ 00:08:17.691 START TEST accel_copy_crc32c 00:08:17.691 ************************************ 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:17.691 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:17.691 [2024-07-15 09:16:28.638977] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:17.692 [2024-07-15 09:16:28.639041] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid726802 ] 00:08:17.692 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.692 [2024-07-15 09:16:28.695686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.692 [2024-07-15 09:16:28.797466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.692 
09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.692 09:16:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.065 00:08:19.065 real 0m1.433s 00:08:19.065 user 0m1.305s 00:08:19.065 sys 0m0.131s 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.065 09:16:30 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:19.065 ************************************ 00:08:19.065 END TEST accel_copy_crc32c 00:08:19.065 ************************************ 00:08:19.065 09:16:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:19.065 09:16:30 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:19.065 09:16:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:19.066 09:16:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.066 09:16:30 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.066 ************************************ 00:08:19.066 START TEST accel_copy_crc32c_C2 00:08:19.066 ************************************ 00:08:19.066 09:16:30 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:19.066 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:19.066 [2024-07-15 09:16:30.113872] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:19.066 [2024-07-15 09:16:30.113931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid726957 ] 00:08:19.066 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.066 [2024-07-15 09:16:30.169082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.324 [2024-07-15 09:16:30.275957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
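The _C2 variants differ from their base tests only by the trailing -C 2 on the accel_perf command line above. The observable effect in the readback below is the second buffer size: the plain copy_crc32c run earlier read '4096 bytes' twice, while this -C 2 run reads '4096 bytes' and '8192 bytes' (2 x 4096). A hedged sketch of the invocation, with the fd-62 empty config an assumption as before:

    # copy_crc32c with -C 2; watch for the '8192 bytes' value in the trace
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 62<<< '{}'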
00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.324 09:16:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
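The [[ software == \s\o\f\t\w\a\r\e ]] lines in each results block, including the one just below, are not corruption. accel.sh compares the selected module against a quoted string inside [[ ]], and bash's xtrace renders a quoted right-hand side of == with every character backslash-escaped, to show it is matched literally rather than as a glob. A two-line demonstration:

    set -x
    accel_module=software
    [[ "$accel_module" == "software" ]] && echo match
    # xtrace prints the test as: [[ software == \s\o\f\t\w\a\r\e ]]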
00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.698 00:08:20.698 real 0m1.419s 00:08:20.698 user 0m1.297s 00:08:20.698 sys 0m0.124s 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.698 09:16:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:20.698 ************************************ 00:08:20.698 END TEST accel_copy_crc32c_C2 00:08:20.698 ************************************ 00:08:20.698 09:16:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:20.698 09:16:31 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:20.698 09:16:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:20.698 09:16:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.698 09:16:31 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.698 ************************************ 00:08:20.698 START TEST accel_dualcast 00:08:20.698 ************************************ 00:08:20.698 09:16:31 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:20.698 [2024-07-15 09:16:31.584488] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
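The dualcast test starting here exercises, as the name suggests, one source buffer copied to two destinations, which is why the settings loop below still reads a single '4096 bytes' size. A conceptual shell model of those semantics (illustration only, not how accel_perf implements the opcode):

    # One 4096-byte source fanned out to two destinations, both verified.
    src=$(mktemp) dst1=$(mktemp) dst2=$(mktemp)
    head -c 4096 /dev/urandom > "$src"
    tee "$dst1" < "$src" > "$dst2"
    cmp -s "$src" "$dst1" && cmp -s "$src" "$dst2" && echo 'dualcast OK'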
00:08:20.698 [2024-07-15 09:16:31.584552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727223 ] 00:08:20.698 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.698 [2024-07-15 09:16:31.643189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.698 [2024-07-15 09:16:31.746515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.698 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.699 09:16:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.073 09:16:32 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.073 09:16:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:22.073 09:16:33 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.073 00:08:22.073 real 0m1.436s 00:08:22.073 user 0m1.293s 00:08:22.073 sys 0m0.144s 00:08:22.073 09:16:33 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.073 09:16:33 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:22.073 ************************************ 00:08:22.073 END TEST accel_dualcast 00:08:22.073 ************************************ 00:08:22.073 09:16:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:22.073 09:16:33 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:22.073 09:16:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:22.073 09:16:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.073 09:16:33 accel -- common/autotest_common.sh@10 -- # set +x 00:08:22.073 ************************************ 00:08:22.073 START TEST accel_compare 00:08:22.073 ************************************ 00:08:22.073 09:16:33 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:22.073 09:16:33 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:22.073 [2024-07-15 09:16:33.069486] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
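The accel_perf command echoed just above is the shape every test in this file takes: the harness assembles a JSON accel config in-process and hands it to the binary on file descriptor 62, the workload named by -w runs for the one second requested by -t 1, and -y asks for verification of each completed operation. A minimal sketch of that pattern, using only the flags visible in this log (the empty '{}' config is a stand-in for whatever build_accel_config actually assembles):

    # Feed a config on fd 62 and run the "compare" workload for 1 second
    # with verification (-y); '{}' is a placeholder config, not the real one.
    accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    "$accel_perf" -c /dev/fd/62 -t 1 -w compare -y 62<<< '{}'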
00:08:22.073 [2024-07-15 09:16:33.069549] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727382 ] 00:08:22.073 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.073 [2024-07-15 09:16:33.128957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.073 [2024-07-15 09:16:33.238713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.331 09:16:33 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.331 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.332 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.332 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:22.332 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.332 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.332 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:22.332 09:16:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:22.332 09:16:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:22.332 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:22.332 09:16:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 
09:16:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:23.705 09:16:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.705 00:08:23.705 real 0m1.430s 00:08:23.705 user 0m1.295s 00:08:23.705 sys 0m0.136s 00:08:23.705 09:16:34 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.705 09:16:34 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:23.705 ************************************ 00:08:23.705 END TEST accel_compare 00:08:23.705 ************************************ 00:08:23.705 09:16:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:23.705 09:16:34 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:23.705 09:16:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:23.705 09:16:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.705 09:16:34 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.705 ************************************ 00:08:23.705 START TEST accel_xor 00:08:23.705 ************************************ 00:08:23.705 09:16:34 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:23.705 [2024-07-15 09:16:34.550756] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
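Each test above finishes the same way: the shell's time output prints the real/user/sys split for the one-second run, and the harness then asserts both that an accel module was selected and that it is the expected one. The doubled [[ ... ]] checks in the trace amount to the following; the backslash-escaped letters force bash to treat the right-hand side as a literal string rather than a glob pattern:

    # As seen after expansion in the trace ("[[ -n software ]]" etc.);
    # $accel_module is what accel.sh assigns when it reads "val=software".
    [[ -n $accel_module ]]                      # a module was chosen
    [[ $accel_module == \s\o\f\t\w\a\r\e ]]     # and it is literally "software"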
00:08:23.705 [2024-07-15 09:16:34.550831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727543 ] 00:08:23.705 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.705 [2024-07-15 09:16:34.607231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.705 [2024-07-15 09:16:34.714230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.705 09:16:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:25.080 09:16:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.080 00:08:25.080 real 0m1.438s 00:08:25.080 user 0m1.301s 00:08:25.080 sys 0m0.139s 00:08:25.080 09:16:35 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.080 09:16:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:25.080 ************************************ 00:08:25.080 END TEST accel_xor 00:08:25.080 ************************************ 00:08:25.080 09:16:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:25.080 09:16:35 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:25.080 09:16:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:25.080 09:16:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.080 09:16:35 accel -- common/autotest_common.sh@10 -- # set +x 00:08:25.080 ************************************ 00:08:25.080 START TEST accel_xor 00:08:25.080 ************************************ 00:08:25.080 09:16:36 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:25.080 [2024-07-15 09:16:36.033354] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
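The xor test is queued twice by accel.sh: once with the default two source buffers (the val=2 line in the run above) and once with -x 3, the run starting below (val=3 in its trace). What the workload verifies is the plain byte-wise xor of all sources into one destination; a toy illustration of the three-source case, with made-up byte values rather than anything taken from SPDK:

    # Illustrative semantics only -- not SPDK code. Three hypothetical
    # source bytes are folded into dst with ^, as a 3-way xor would do.
    dst=0
    for src in 0xaa 0x55 0x0f; do
        dst=$(( dst ^ src ))
    done
    printf 'dst=0x%02x\n' "$dst"   # prints dst=0xf0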
00:08:25.080 [2024-07-15 09:16:36.033418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727816 ] 00:08:25.080 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.080 [2024-07-15 09:16:36.090007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.080 [2024-07-15 09:16:36.194255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.080 09:16:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:26.454 09:16:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.454 00:08:26.454 real 0m1.432s 00:08:26.454 user 0m1.298s 00:08:26.454 sys 0m0.135s 00:08:26.454 09:16:37 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.454 09:16:37 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:26.454 ************************************ 00:08:26.454 END TEST accel_xor 00:08:26.454 ************************************ 00:08:26.454 09:16:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:26.454 09:16:37 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:26.454 09:16:37 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:26.454 09:16:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.454 09:16:37 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.454 ************************************ 00:08:26.454 START TEST accel_dif_verify 00:08:26.454 ************************************ 00:08:26.454 09:16:37 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:26.454 09:16:37 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:26.454 [2024-07-15 09:16:37.512789] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
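The dif_* workloads that begin here carry two extra parameters in their traces compared with the earlier tests: alongside the two 4096-byte buffers there is a '512 bytes' and an '8 bytes' value. That combination reads like the usual T10-DIF layout, where an 8-byte protection-information field accompanies each 512-byte block; under that assumption the per-buffer arithmetic is:

    # Sizes as they appear in the dif_verify trace below; the T10-DIF
    # interpretation (8 PI bytes per 512-byte block) is an assumption.
    buf=4096 blk=512 pi=8
    blocks=$(( buf / blk ))            # 8 blocks per buffer
    pi_bytes=$(( blocks * pi ))        # 64 bytes of protection info
    echo "$blocks blocks, $pi_bytes PI bytes per $buf-byte buffer"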
00:08:26.454 [2024-07-15 09:16:37.512875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727968 ] 00:08:26.454 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.454 [2024-07-15 09:16:37.572163] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.713 [2024-07-15 09:16:37.674809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.713 09:16:37 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.714 09:16:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:28.089 09:16:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.089 00:08:28.089 real 0m1.429s 00:08:28.089 user 0m1.295s 00:08:28.089 sys 0m0.137s 00:08:28.089 09:16:38 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.089 09:16:38 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:28.089 ************************************ 00:08:28.089 END TEST accel_dif_verify 00:08:28.089 ************************************ 00:08:28.089 09:16:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:28.089 09:16:38 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:28.089 09:16:38 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:28.089 09:16:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.089 09:16:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.089 ************************************ 00:08:28.089 START TEST accel_dif_generate 00:08:28.089 ************************************ 00:08:28.089 09:16:38 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:28.089 09:16:38 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:28.089 09:16:38 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:28.089 09:16:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 
09:16:38 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:28.089 09:16:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:28.089 09:16:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:28.089 09:16:38 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.089 09:16:38 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.089 09:16:38 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.089 09:16:38 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.089 09:16:38 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.089 09:16:38 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:28.089 09:16:38 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:28.089 [2024-07-15 09:16:38.989237] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:28.089 [2024-07-15 09:16:38.989300] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728135 ] 00:08:28.089 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.089 [2024-07-15 09:16:39.046808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.089 [2024-07-15 09:16:39.156225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:28.089 09:16:39 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:28.089 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.090 09:16:39 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.090 09:16:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:29.462 09:16:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:29.462 09:16:40 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:29.462 00:08:29.462 real 0m1.430s 00:08:29.462 user 0m1.301s 00:08:29.462 sys 0m0.133s 00:08:29.462 09:16:40 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.462 09:16:40 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:29.462 ************************************ 00:08:29.462 END TEST accel_dif_generate 00:08:29.462 ************************************ 00:08:29.462 09:16:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:29.462 09:16:40 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:29.462 09:16:40 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:29.462 09:16:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.462 09:16:40 accel -- common/autotest_common.sh@10 -- # set +x 00:08:29.462 ************************************ 00:08:29.462 START TEST accel_dif_generate_copy 00:08:29.462 ************************************ 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:29.462 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:29.462 [2024-07-15 09:16:40.468209] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
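The build_accel_config entries traced just above (accel/accel.sh@31-@41) outline how the harness assembles the JSON that accel_perf later consumes via -c /dev/fd/62. A minimal sketch of that shape, reconstructed from the trace alone — the wrapper object and fragment contents are assumptions, since every feature gate evaluated false in this run:

build_accel_config() {                      # traced at accel/accel.sh@12
    local accel_json_cfg=()                 # @31: start with no module overrides
    # @32-@34: three feature gates all evaluated false here ("[[ 0 -gt 0 ]]")
    # @36: no caller-supplied JSON was injected ("[[ -n '' ]]")
    local IFS=,                             # @40: join any collected fragments with commas
    # @41: pretty-print the result for accel_perf; the enclosing object is an
    # assumed placeholder, not the verbatim SPDK layout
    jq -r . <<<"{\"accel_cfg\": [${accel_json_cfg[*]}]}"
}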
00:08:29.462 [2024-07-15 09:16:40.468273] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728294 ] 00:08:29.462 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.462 [2024-07-15 09:16:40.525449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.462 [2024-07-15 09:16:40.629685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
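The long runs of IFS=: / read -r var val / case "$var" entries that dominate this log all come from one parsing loop in accel.sh (@19-@23): accel_perf echoes its effective settings as var:val pairs, and the harness reads them back to learn which opcode and module actually executed. A hedged sketch of that loop — the key names in the sample input and case arms are assumptions; only the assignments (accel_opc, accel_module) are visible in the trace:

while IFS=: read -r var val; do             # accel/accel.sh@19
    case "$var" in                          # @21
        *opc*)    accel_opc=$val ;;         # @23: e.g. dif_generate_copy
        *module*) accel_module=$val ;;      # @22: "software" in every run here
    esac
done <<'EOF'
opc:dif_generate_copy
module:software
EOF
[[ -n $accel_module ]] && [[ -n $accel_opc ]]   # mirrors the @27 assertions above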
00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.721 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.722 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.722 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.722 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.722 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.722 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.722 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.722 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.722 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.722 09:16:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.096 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.097 00:08:31.097 real 0m1.431s 00:08:31.097 user 0m1.298s 00:08:31.097 sys 0m0.134s 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.097 09:16:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:31.097 ************************************ 00:08:31.097 END TEST accel_dif_generate_copy 00:08:31.097 ************************************ 00:08:31.097 09:16:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:31.097 09:16:41 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:31.097 09:16:41 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:31.097 09:16:41 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:31.097 09:16:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.097 09:16:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.097 ************************************ 00:08:31.097 START TEST accel_comp 00:08:31.097 ************************************ 00:08:31.097 09:16:41 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:31.097 09:16:41 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:08:31.097 09:16:41 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:31.097 09:16:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:41 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:31.097 09:16:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:41 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:31.097 09:16:41 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:31.097 09:16:41 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.097 09:16:41 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.097 09:16:41 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.097 09:16:41 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.097 09:16:41 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.097 09:16:41 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:31.097 09:16:41 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:31.097 [2024-07-15 09:16:41.941143] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:31.097 [2024-07-15 09:16:41.941209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728562 ] 00:08:31.097 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.097 [2024-07-15 09:16:41.995834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.097 [2024-07-15 09:16:42.099200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 09:16:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:32.471 09:16:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:32.471 00:08:32.471 real 0m1.429s 00:08:32.471 user 0m1.293s 00:08:32.471 sys 0m0.139s 00:08:32.471 09:16:43 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.471 09:16:43 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:32.471 ************************************ 00:08:32.471 END TEST accel_comp 00:08:32.471 ************************************ 00:08:32.471 09:16:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:32.471 09:16:43 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:32.471 09:16:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:32.471 09:16:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.471 09:16:43 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:32.471 ************************************ 00:08:32.471 START TEST accel_decomp 00:08:32.471 ************************************ 00:08:32.471 09:16:43 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:32.471 [2024-07-15 09:16:43.420717] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
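For the decompress tests the harness launches the example binary directly, as the accel/accel.sh@12 entry above records. Restated as a standalone command, with flag meanings inferred from this log itself (hedged — the binary's own help output is authoritative):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
args=(
    -c /dev/fd/62               # accel JSON config on an inherited fd (built by build_accel_config)
    -t 1                        # run time; matches the val='1 seconds' entries traced above
    -w decompress               # workload/opcode under test
    -l "$SPDK/test/accel/bib"   # compressed input file used by all the decompress runs
    -y                          # verification toggle (the trace shows val=Yes here, val=No for the dif tests)
)
"$SPDK/build/examples/accel_perf" "${args[@]}"

The later variants only extend this line: accel_decomp_full adds -o 0 (and the traced buffer grows from '4096 bytes' to '111250 bytes'), and the mcore tests add -m 0xf to spread reactors across four cores.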
00:08:32.471 [2024-07-15 09:16:43.420783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728715 ] 00:08:32.471 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.471 [2024-07-15 09:16:43.478310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.471 [2024-07-15 09:16:43.582495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.471 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:32.472 09:16:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.844 09:16:44 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:33.844 09:16:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.844 00:08:33.844 real 0m1.418s 00:08:33.844 user 0m1.299s 00:08:33.844 sys 0m0.121s 00:08:33.844 09:16:44 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.844 09:16:44 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:33.844 ************************************ 00:08:33.844 END TEST accel_decomp 00:08:33.844 ************************************ 00:08:33.844 09:16:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.844 09:16:44 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:33.844 09:16:44 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:33.844 09:16:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.844 09:16:44 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.844 ************************************ 00:08:33.844 START TEST accel_decomp_full 00:08:33.844 ************************************ 00:08:33.845 09:16:44 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:33.845 09:16:44 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:33.845 09:16:44 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:33.845 09:16:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.845 09:16:44 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:33.845 09:16:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.845 09:16:44 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:33.845 09:16:44 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:33.845 09:16:44 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.845 09:16:44 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.845 09:16:44 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.845 09:16:44 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.845 09:16:44 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.845 09:16:44 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:33.845 09:16:44 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:33.845 [2024-07-15 09:16:44.883608] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:08:33.845 [2024-07-15 09:16:44.883671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728882 ] 00:08:33.845 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.845 [2024-07-15 09:16:44.941114] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.104 [2024-07-15 09:16:45.046755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.104 09:16:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:35.476 09:16:46 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:35.476 00:08:35.476 real 0m1.433s 00:08:35.476 user 0m1.315s 00:08:35.476 sys 0m0.120s 00:08:35.476 09:16:46 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.476 09:16:46 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:35.476 ************************************ 00:08:35.476 END TEST accel_decomp_full 00:08:35.476 ************************************ 00:08:35.476 09:16:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:35.476 09:16:46 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:35.476 09:16:46 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:08:35.476 09:16:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.476 09:16:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:35.476 ************************************ 00:08:35.476 START TEST accel_decomp_mcore 00:08:35.476 ************************************ 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:35.476 [2024-07-15 09:16:46.364108] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:08:35.476 [2024-07-15 09:16:46.364181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729148 ] 00:08:35.476 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.476 [2024-07-15 09:16:46.422532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.476 [2024-07-15 09:16:46.529089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.476 [2024-07-15 09:16:46.529154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.476 [2024-07-15 09:16:46.529218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.476 [2024-07-15 09:16:46.529221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:35.476 09:16:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.845 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:36.845 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.845 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:36.846 09:16:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:36.846 00:08:36.846 real 0m1.453s 00:08:36.846 user 0m4.758s 00:08:36.846 sys 0m0.148s 00:08:36.846 09:16:47 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:36.846 09:16:47 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:08:36.846 ************************************
00:08:36.846 END TEST accel_decomp_mcore
00:08:36.846 ************************************
00:08:36.846 09:16:47 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:36.846 09:16:47 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:36.846 09:16:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:08:36.846 09:16:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:36.846 09:16:47 accel -- common/autotest_common.sh@10 -- # set +x
00:08:36.846 ************************************
00:08:36.846 START TEST accel_decomp_full_mcore
00:08:36.846 ************************************
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:08:36.846 09:16:47 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
00:08:36.846 [2024-07-15 09:16:47.859618] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
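Every case in this suite has the same shape: run_test wraps accel_test, which launches the accel_perf example binary with the workload flags traced above and feeds it a generated JSON accel config over /dev/fd/62. A minimal hand-run equivalent, assuming the workspace paths from this log; the flag glosses are inferred from the traced values, not quoted from accel_perf's own help output:

  # Sketch: re-running the full_mcore decompress case by hand.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BIB=$SPDK/test/accel/bib   # compressed input used by every decompress case
  # -t 1   -> run for '1 seconds' (the traced val above)
  # -w     -> workload, here decompress; -y -> verify the output
  # -o 0   -> full 111250-byte buffers instead of 4096-byte chunks (both sizes appear as traced vals)
  # -m 0xf -> core mask; the EAL trace below starts reactors on cores 0-3
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf
  # The harness additionally passes -c /dev/fd/62 carrying the JSON config built by build_accel_config.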
00:08:36.846 [2024-07-15 09:16:47.859670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729310 ] 00:08:36.846 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.846 [2024-07-15 09:16:47.915903] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.846 [2024-07-15 09:16:48.022064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.846 [2024-07-15 09:16:48.022124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.846 [2024-07-15 09:16:48.022189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.846 [2024-07-15 09:16:48.022192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:37.105 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:37.106 09:16:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=:
00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:38.479
00:08:38.479 real 0m1.447s
00:08:38.479 user 0m4.771s
00:08:38.479 sys 0m0.131s
00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:38.479 09:16:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:08:38.479 ************************************
00:08:38.479 END TEST accel_decomp_full_mcore
00:08:38.479 ************************************
00:08:38.479 09:16:49 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:38.479 09:16:49 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:38.479 09:16:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:08:38.479 09:16:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:38.479 09:16:49 accel -- common/autotest_common.sh@10 -- # set +x
00:08:38.479 ************************************
00:08:38.479 START TEST accel_decomp_mthread
00:08:38.479 ************************************
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=,
00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
00:08:38.479 [2024-07-15 09:16:49.357953] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
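The long runs of IFS=:, read -r var val, and case "$var" entries that dominate this log are accel.sh replaying accel_perf's settings one name:value line at a time and latching the ones it later asserts on. A rough, self-contained sketch of that loop; the input lines and the matching logic are illustrative stand-ins, not the harness's exact code:

  # Sketch: why the trace repeats IFS=: / read -r var val / case "$var" in.
  while IFS=: read -r var val; do
    val=${val# }                    # trim the space left after the colon
    case "$var" in
      opc)    accel_opc=$val ;;     # later checked via [[ -n decompress ]]
      module) accel_module=$val ;;  # later checked via [[ -n software ]]
    esac
  done < <(printf '%s\n' 'opc: decompress' 'module: software')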
00:08:38.479 [2024-07-15 09:16:49.358023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729473 ] 00:08:38.479 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.479 [2024-07-15 09:16:49.416681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.479 [2024-07-15 09:16:49.522157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.479 09:16:49 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.479 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.480 09:16:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.851 09:16:50 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.851 00:08:39.851 real 0m1.434s 00:08:39.851 user 0m1.303s 00:08:39.851 sys 0m0.134s 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.851 09:16:50 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:39.851 ************************************ 00:08:39.851 END TEST accel_decomp_mthread 00:08:39.851 ************************************ 00:08:39.851 09:16:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:39.851 09:16:50 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:39.851 09:16:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:39.851 09:16:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.851 09:16:50 accel -- 
common/autotest_common.sh@10 -- # set +x
00:08:39.851 ************************************
00:08:39.851 START TEST accel_decomp_full_mthread
00:08:39.851 ************************************
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=,
00:08:39.851 09:16:50 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
[2024-07-15 09:16:50.839582] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
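The START TEST/END TEST banners and the real/user/sys triples bracketing every case come from the run_test helper in common/autotest_common.sh, which banners the name, times the wrapped command, and propagates its return code. A stripped-down sketch of that pattern, not the actual helper:

  run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                      # emits the real/user/sys triple seen throughout this log
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
  }
  run_test_sketch demo sleep 1     # prints the banners around any timed command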
00:08:39.851 [2024-07-15 09:16:50.839644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729731 ] 00:08:39.851 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.851 [2024-07-15 09:16:50.896020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.851 [2024-07-15 09:16:50.998940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.108 09:16:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.516 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.516 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.516 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.516 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.516 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.516 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.516 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:41.517 00:08:41.517 real 0m1.458s 00:08:41.517 user 0m1.324s 00:08:41.517 sys 0m0.136s 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.517 09:16:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:41.517 ************************************ 00:08:41.517 END TEST accel_decomp_full_mthread 
00:08:41.517 ************************************
00:08:41.517 09:16:52 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:41.517 09:16:52 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:08:41.517 09:16:52 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:08:41.517 09:16:52 accel -- accel/accel.sh@137 -- # build_accel_config
00:08:41.517 09:16:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:41.517 09:16:52 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:08:41.517 09:16:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:41.517 09:16:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:41.517 09:16:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:41.517 09:16:52 accel -- common/autotest_common.sh@10 -- # set +x
00:08:41.517 09:16:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:41.517 09:16:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:41.517 09:16:52 accel -- accel/accel.sh@40 -- # local IFS=,
00:08:41.517 09:16:52 accel -- accel/accel.sh@41 -- # jq -r .
00:08:41.517 ************************************
00:08:41.517 START TEST accel_dif_functional_tests
00:08:41.517 ************************************
00:08:41.517 09:16:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:08:41.517 [2024-07-15 09:16:52.365289] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:08:41.517 [2024-07-15 09:16:52.365349] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729900 ]
00:08:41.517 EAL: No free 2048 kB hugepages reported on node 1
00:08:41.517 [2024-07-15 09:16:52.420418] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:41.517 [2024-07-15 09:16:52.525290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:41.517 [2024-07-15 09:16:52.525358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:41.517 [2024-07-15 09:16:52.525361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:41.517
00:08:41.517
00:08:41.517 CUnit - A unit testing framework for C - Version 2.1-3
00:08:41.517 http://cunit.sourceforge.net/
00:08:41.517
00:08:41.517
00:08:41.517 Suite: accel_dif
00:08:41.517 Test: verify: DIF generated, GUARD check ...passed
00:08:41.517 Test: verify: DIF generated, APPTAG check ...passed
00:08:41.517 Test: verify: DIF generated, REFTAG check ...passed
00:08:41.517 Test: verify: DIF not generated, GUARD check ...[2024-07-15 09:16:52.615873] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:08:41.517 passed
00:08:41.517 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 09:16:52.615947] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:08:41.517 passed
00:08:41.517 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 09:16:52.615984] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:08:41.517 passed
00:08:41.517 Test: verify: APPTAG correct, APPTAG check ...passed
00:08:41.517 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 09:16:52.616050] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:08:41.517 passed
00:08:41.517 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:08:41.517 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:08:41.517 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:08:41.517 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 09:16:52.616200] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:08:41.517 passed
00:08:41.517 Test: verify copy: DIF generated, GUARD check ...passed
00:08:41.517 Test: verify copy: DIF generated, APPTAG check ...passed
00:08:41.517 Test: verify copy: DIF generated, REFTAG check ...passed
00:08:41.517 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 09:16:52.616360] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:08:41.517 passed
00:08:41.517 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 09:16:52.616397] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:08:41.517 passed
00:08:41.517 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 09:16:52.616431] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:08:41.517 passed
00:08:41.517 Test: generate copy: DIF generated, GUARD check ...passed
00:08:41.517 Test: generate copy: DIF generated, APTTAG check ...passed
00:08:41.517 Test: generate copy: DIF generated, REFTAG check ...passed
00:08:41.517 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:08:41.517 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:08:41.517 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:08:41.517 Test: generate copy: iovecs-len validate ...[2024-07-15 09:16:52.616654] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:08:41.517 passed
00:08:41.517 Test: generate copy: buffer alignment validate ...passed
00:08:41.517
00:08:41.517 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:41.517               suites      1      1    n/a      0        0
00:08:41.517                tests     26     26     26      0        0
00:08:41.517              asserts    115    115    115      0      n/a
00:08:41.517
00:08:41.517 Elapsed time = 0.003 seconds
00:08:41.775
00:08:41.775 real 0m0.530s
00:08:41.775 user 0m0.799s
00:08:41.775 sys 0m0.174s
00:08:41.775 09:16:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:41.775 09:16:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:08:41.775 ************************************
00:08:41.775 END TEST accel_dif_functional_tests
00:08:41.775 ************************************
00:08:41.775 09:16:52 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:41.775
00:08:41.775 real 0m32.378s
00:08:41.775 user 0m36.019s
00:08:41.775 sys 0m4.302s
00:08:41.775 09:16:52 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:41.775 09:16:52 accel -- common/autotest_common.sh@10 -- # set +x
00:08:41.775 ************************************
00:08:41.775 END TEST accel
00:08:41.775 ************************************
00:08:41.775 09:16:52 -- common/autotest_common.sh@1142 -- # return 0
00:08:41.775 09:16:52 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:08:41.775 09:16:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:41.775 09:16:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:41.775 09:16:52 -- common/autotest_common.sh@10 -- # set +x
00:08:41.775 ************************************
00:08:41.775 START TEST accel_rpc
00:08:41.775 ************************************
00:08:41.775 09:16:52 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:08:42.033 * Looking for test storage...
00:08:42.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:08:42.033 09:16:52 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:08:42.033 09:16:52 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=729981
00:08:42.033 09:16:52 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:08:42.033 09:16:52 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 729981
00:08:42.033 09:16:52 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 729981 ']'
00:08:42.033 09:16:52 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:42.033 09:16:52 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:42.033 09:16:52 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:42.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:42.033 09:16:52 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:42.033 09:16:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:42.033 [2024-07-15 09:16:53.035287] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
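accel_rpc.sh, whose trace continues below, starts a bare spdk_tgt with --wait-for-rpc (RPC server up, framework init deferred), waits on the /var/tmp/spdk.sock socket via waitforlisten, reassigns the copy opcode, and only then runs framework_start_init. Condensed into plain commands; the polling loop is a crude stand-in for the waitforlisten helper, while every RPC name appears verbatim in this log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc & tgt_pid=$!
  until "$RPC" spdk_get_version >/dev/null 2>&1; do sleep 0.1; done
  "$RPC" accel_assign_opc -o copy -m incorrect   # accepted pre-init, per the NOTICE below
  "$RPC" accel_assign_opc -o copy -m software    # the last assignment wins
  "$RPC" framework_start_init
  "$RPC" accel_get_opc_assignments | jq -r .copy | grep software   # expects "software"
  kill "$tgt_pid"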
00:08:42.033 [2024-07-15 09:16:53.035377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729981 ]
00:08:42.033 EAL: No free 2048 kB hugepages reported on node 1
00:08:42.033 [2024-07-15 09:16:53.094200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:42.033 [2024-07-15 09:16:53.203524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:42.291 09:16:53 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:42.291 09:16:53 accel_rpc -- common/autotest_common.sh@862 -- # return 0
00:08:42.291 09:16:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:08:42.291 09:16:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:08:42.291 09:16:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:08:42.291 09:16:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:08:42.291 09:16:53 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:08:42.291 09:16:53 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:42.291 09:16:53 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:42.291 09:16:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:42.291 ************************************
00:08:42.291 START TEST accel_assign_opcode
00:08:42.291 ************************************
00:08:42.291 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite
00:08:42.291 09:16:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:08:42.291 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:42.291 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:08:42.291 [2024-07-15 09:16:53.260123] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:08:42.291 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:42.291 09:16:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:08:42.291 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:42.291 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:08:42.291 [2024-07-15 09:16:53.268132] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:08:42.291 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:42.291 09:16:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:08:42.291 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:42.291 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:08:42.637 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:42.637 09:16:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:08:42.637 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:42.637 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:08:42.637 09:16:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:08:42.637 09:16:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:08:42.637 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:42.637 software
00:08:42.637
00:08:42.637 real 0m0.281s
00:08:42.637 user 0m0.039s
00:08:42.637 sys 0m0.008s
00:08:42.637 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:42.637 09:16:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:08:42.637 ************************************
00:08:42.637 END TEST accel_assign_opcode
00:08:42.637 ************************************
00:08:42.637 09:16:53 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:08:42.637 09:16:53 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 729981
00:08:42.637 09:16:53 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 729981 ']'
00:08:42.637 09:16:53 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 729981
00:08:42.637 09:16:53 accel_rpc -- common/autotest_common.sh@953 -- # uname
00:08:42.637 09:16:53 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:42.637 09:16:53 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 729981
00:08:42.637 09:16:53 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:08:42.637 09:16:53 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:08:42.637 09:16:53 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 729981'
00:08:42.637 killing process with pid 729981
00:08:42.637 09:16:53 accel_rpc -- common/autotest_common.sh@967 -- # kill 729981
00:08:42.637 09:16:53 accel_rpc -- common/autotest_common.sh@972 -- # wait 729981
00:08:42.905
00:08:42.905 real 0m1.101s
00:08:42.905 user 0m1.060s
00:08:42.905 sys 0m0.400s
00:08:42.905 09:16:54 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:42.905 09:16:54 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:42.905 ************************************
00:08:42.905 END TEST accel_rpc
00:08:42.905 ************************************
00:08:42.905 09:16:54 -- common/autotest_common.sh@1142 -- # return 0
00:08:42.905 09:16:54 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:08:42.905 09:16:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:42.905 09:16:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:42.905 09:16:54 -- common/autotest_common.sh@10 -- # set +x
00:08:42.905 ************************************
00:08:42.905 START TEST app_cmdline
00:08:42.905 ************************************
00:08:42.905 09:16:54 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:08:43.177 * Looking for test storage...
00:08:43.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:08:43.177 09:16:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:08:43.177 09:16:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=730198
00:08:43.177 09:16:54 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:08:43.177 09:16:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 730198
00:08:43.177 09:16:54 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 730198 ']'
00:08:43.177 09:16:54 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:43.177 09:16:54 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:43.177 09:16:54 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:43.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:43.177 09:16:54 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:43.177 09:16:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:43.177 [2024-07-15 09:16:54.180309] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:08:43.177 [2024-07-15 09:16:54.180379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid730198 ]
00:08:43.177 EAL: No free 2048 kB hugepages reported on node 1
00:08:43.177 [2024-07-15 09:16:54.238739] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:43.177 [2024-07-15 09:16:54.341429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:43.470 09:16:54 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:43.470 09:16:54 app_cmdline -- common/autotest_common.sh@862 -- # return 0
00:08:43.470 09:16:54 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:08:43.743 {
00:08:43.743 "version": "SPDK v24.09-pre git sha1 b0f01ebc5",
00:08:43.743 "fields": {
00:08:43.743 "major": 24,
00:08:43.743 "minor": 9,
00:08:43.743 "patch": 0,
00:08:43.743 "suffix": "-pre",
00:08:43.743 "commit": "b0f01ebc5"
00:08:43.743 }
00:08:43.743 }
00:08:43.743 09:16:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:08:43.743 09:16:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:08:43.743 09:16:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:08:43.743 09:16:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:08:43.743 09:16:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:43.743 09:16:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:43.743 09:16:54 app_cmdline -- app/cmdline.sh@26 -- # sort
00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:43.743 09:16:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:08:43.743 09:16:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
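This spdk_tgt instance was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the NOT test traced below expects any method outside that allowlist to be rejected. Reproduced by hand against the same target, both calls taken verbatim from this log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" spdk_get_version         # allowed: returns the version JSON above
  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats   # rejected: code -32601, "Method not found"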
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:43.743 09:16:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:43.743 09:16:54 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:44.026 request: 00:08:44.026 { 00:08:44.026 "method": "env_dpdk_get_mem_stats", 00:08:44.026 "req_id": 1 00:08:44.026 } 00:08:44.026 Got JSON-RPC error response 00:08:44.026 response: 00:08:44.026 { 00:08:44.026 "code": -32601, 00:08:44.026 "message": "Method not found" 00:08:44.026 } 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:44.026 09:16:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 730198 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 730198 ']' 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 730198 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 730198 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 730198' 00:08:44.026 killing process with pid 730198 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@967 -- # kill 730198 00:08:44.026 09:16:55 app_cmdline -- common/autotest_common.sh@972 -- # wait 730198 00:08:44.637 00:08:44.637 real 0m1.543s 00:08:44.637 user 0m1.946s 00:08:44.637 sys 0m0.412s 00:08:44.637 09:16:55 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.637 
09:16:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:44.637 ************************************ 00:08:44.637 END TEST app_cmdline 00:08:44.637 ************************************ 00:08:44.637 09:16:55 -- common/autotest_common.sh@1142 -- # return 0 00:08:44.637 09:16:55 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:44.637 09:16:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:44.637 09:16:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.637 09:16:55 -- common/autotest_common.sh@10 -- # set +x 00:08:44.637 ************************************ 00:08:44.637 START TEST version 00:08:44.637 ************************************ 00:08:44.637 09:16:55 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:44.637 * Looking for test storage... 00:08:44.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:44.637 09:16:55 version -- app/version.sh@17 -- # get_header_version major 00:08:44.637 09:16:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:44.637 09:16:55 version -- app/version.sh@14 -- # cut -f2 00:08:44.637 09:16:55 version -- app/version.sh@14 -- # tr -d '"' 00:08:44.637 09:16:55 version -- app/version.sh@17 -- # major=24 00:08:44.637 09:16:55 version -- app/version.sh@18 -- # get_header_version minor 00:08:44.637 09:16:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:44.637 09:16:55 version -- app/version.sh@14 -- # cut -f2 00:08:44.637 09:16:55 version -- app/version.sh@14 -- # tr -d '"' 00:08:44.637 09:16:55 version -- app/version.sh@18 -- # minor=9 00:08:44.637 09:16:55 version -- app/version.sh@19 -- # get_header_version patch 00:08:44.637 09:16:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:44.637 09:16:55 version -- app/version.sh@14 -- # cut -f2 00:08:44.637 09:16:55 version -- app/version.sh@14 -- # tr -d '"' 00:08:44.637 09:16:55 version -- app/version.sh@19 -- # patch=0 00:08:44.637 09:16:55 version -- app/version.sh@20 -- # get_header_version suffix 00:08:44.637 09:16:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:44.637 09:16:55 version -- app/version.sh@14 -- # cut -f2 00:08:44.637 09:16:55 version -- app/version.sh@14 -- # tr -d '"' 00:08:44.637 09:16:55 version -- app/version.sh@20 -- # suffix=-pre 00:08:44.637 09:16:55 version -- app/version.sh@22 -- # version=24.9 00:08:44.637 09:16:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:44.637 09:16:55 version -- app/version.sh@28 -- # version=24.9rc0 00:08:44.637 09:16:55 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:44.637 09:16:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
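
The version test traced above computes the version string twice and requires the two to agree: once by scraping include/spdk/version.h with grep/cut/tr, and once by asking the installed Python package. A minimal standalone sketch of the header-scraping half (the helper name mirrors the traced get_header_version; it assumes, as the traced cut -f2 implies, that each #define and its value are tab-separated in version.h, and the suffix handling is inferred from the traced assignments rather than copied from the script):

  # Sketch of the extraction pattern shown in the traces, not the verbatim script.
  get_header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)    # 24 in this run
  minor=$(get_header_version MINOR)    # 9
  patch=$(get_header_version PATCH)    # 0
  suffix=$(get_header_version SUFFIX)  # -pre
  version="$major.$minor"
  (( patch != 0 )) && version="$version.$patch"     # a zero patch is omitted, per the trace
  [[ $suffix == -pre ]] && version="${version}rc0"  # '-pre' surfaces as Python's 'rc0'

The comparison that follows only passes if this string equals python3 -c 'import spdk; print(spdk.__version__)', i.e. 24.9rc0 on this build.
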
00:08:44.637 09:16:55 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:44.637 09:16:55 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:44.637 00:08:44.637 real 0m0.108s 00:08:44.637 user 0m0.064s 00:08:44.637 sys 0m0.067s 00:08:44.637 09:16:55 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.637 09:16:55 version -- common/autotest_common.sh@10 -- # set +x 00:08:44.637 ************************************ 00:08:44.637 END TEST version 00:08:44.637 ************************************ 00:08:44.637 09:16:55 -- common/autotest_common.sh@1142 -- # return 0 00:08:44.637 09:16:55 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:44.637 09:16:55 -- spdk/autotest.sh@198 -- # uname -s 00:08:44.637 09:16:55 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:44.637 09:16:55 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:44.637 09:16:55 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:44.637 09:16:55 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:44.637 09:16:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:44.637 09:16:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:44.637 09:16:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:44.638 09:16:55 -- common/autotest_common.sh@10 -- # set +x 00:08:44.918 09:16:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:44.918 09:16:55 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:44.918 09:16:55 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:44.918 09:16:55 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:44.918 09:16:55 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:44.918 09:16:55 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:44.918 09:16:55 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:44.918 09:16:55 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:44.918 09:16:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.918 09:16:55 -- common/autotest_common.sh@10 -- # set +x 00:08:44.918 ************************************ 00:08:44.918 START TEST nvmf_tcp 00:08:44.918 ************************************ 00:08:44.918 09:16:55 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:44.918 * Looking for test storage... 00:08:44.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.918 09:16:55 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.918 09:16:55 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.918 09:16:55 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.918 09:16:55 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.918 09:16:55 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.918 09:16:55 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.918 09:16:55 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:44.918 09:16:55 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:44.918 09:16:55 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:44.918 09:16:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:44.918 09:16:55 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:44.918 09:16:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:44.918 09:16:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.918 09:16:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.918 ************************************ 00:08:44.918 START TEST nvmf_example 00:08:44.918 ************************************ 00:08:44.918 09:16:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:44.918 * Looking for test storage... 
00:08:44.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:44.919 09:16:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:46.918 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.918 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:46.918 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:46.918 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:46.918 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:46.918 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:46.918 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:46.918 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:46.918 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:46.918 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:46.919 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:46.919 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:46.919 Found net devices under 
0000:09:00.0: cvl_0_0 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:46.919 Found net devices under 0000:09:00.1: cvl_0_1 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:46.919 09:16:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:46.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:46.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms
00:08:46.919 
00:08:46.919 --- 10.0.0.2 ping statistics ---
00:08:46.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:46.919 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:46.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:46.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms
00:08:46.919 
00:08:46.919 --- 10.0.0.1 ping statistics ---
00:08:46.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:46.919 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable
00:08:46.919 09:16:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:08:47.215 09:16:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:08:47.215 09:16:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:08:47.215 09:16:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=732217
00:08:47.215 09:16:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:08:47.215 09:16:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:08:47.215 09:16:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 732217
00:08:47.215 09:16:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 732217 ']'
00:08:47.215 09:16:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:47.215 09:16:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:47.215 09:16:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
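
Behind the example target that was just launched sits the test bed assembled by the nvmf_tcp_init traces further up: the two ports of one physical NIC are split so that cvl_0_0 (10.0.0.2) lives inside the cvl_0_0_ns_spdk namespace as the target side while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, which forces 10.0.0.1 -> 10.0.0.2 traffic out onto the wire instead of through loopback. Stripped of the harness, that setup reduces to the following sketch (interface names, addresses and rules copied from the log; requires root):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from unconfigured ports
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # hide port 0 from the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through the host firewall
  ping -c 1 10.0.0.2                                   # sanity checks, as in the traces
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Note that only the target binary needs the ip netns exec cvl_0_0_ns_spdk prefix (NVMF_TARGET_NS_CMD); rpc.py still reaches it through /var/tmp/spdk.sock, since filesystem-bound UNIX sockets are shared across network namespaces.
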
00:08:47.215 09:16:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.215 09:16:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.215 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:48.203 09:16:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:48.203 EAL: No free 2048 kB hugepages reported on node 1 
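
Everything the spdk_nvme_perf run above exercises was configured through five JSON-RPC calls in the preceding traces (rpc_cmd is the harness wrapper that ultimately drives scripts/rpc.py against the target's /var/tmp/spdk.sock). Condensed to the bare commands, with flags exactly as traced:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # create the TCP transport layer
  scripts/rpc.py bdev_malloc_create 64 512                 # 64 MiB RAM bdev, 512 B blocks; prints 'Malloc0'
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: any host may connect
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as NSID 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The -r transport ID string handed to perf (trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1) names exactly that listener, so the IOPS and latency figures that follow are a 4 KiB random read/write workload against the in-memory Malloc0 namespace.
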
00:08:58.289 Initializing NVMe Controllers 00:08:58.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:58.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:58.289 Initialization complete. Launching workers. 00:08:58.289 ======================================================== 00:08:58.289 Latency(us) 00:08:58.289 Device Information : IOPS MiB/s Average min max 00:08:58.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14984.70 58.53 4270.90 684.70 15303.26 00:08:58.289 ======================================================== 00:08:58.289 Total : 14984.70 58.53 4270.90 684.70 15303.26 00:08:58.289 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.289 rmmod nvme_tcp 00:08:58.289 rmmod nvme_fabrics 00:08:58.289 rmmod nvme_keyring 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 732217 ']' 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 732217 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 732217 ']' 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 732217 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 732217 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 732217' 00:08:58.289 killing process with pid 732217 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 732217 00:08:58.289 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 732217 00:08:58.548 nvmf threads initialize successfully 00:08:58.548 bdev subsystem init successfully 00:08:58.548 created a nvmf target service 00:08:58.548 create targets's poll groups done 00:08:58.548 all subsystems of target started 00:08:58.548 nvmf target is running 00:08:58.548 all subsystems of target stopped 00:08:58.548 destroy targets's poll groups done 00:08:58.548 destroyed the nvmf target service 00:08:58.548 bdev subsystem finish successfully 00:08:58.548 nvmf threads destroy successfully 00:08:58.548 09:17:09 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.548 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:58.548 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.548 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.548 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.548 09:17:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.548 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.548 09:17:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.083 09:17:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:01.083 09:17:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:01.083 09:17:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.083 09:17:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:01.083 00:09:01.083 real 0m15.815s 00:09:01.083 user 0m44.788s 00:09:01.083 sys 0m3.309s 00:09:01.083 09:17:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.083 09:17:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:01.083 ************************************ 00:09:01.083 END TEST nvmf_example 00:09:01.083 ************************************ 00:09:01.083 09:17:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:01.083 09:17:11 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:01.083 09:17:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.083 09:17:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.083 09:17:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.083 ************************************ 00:09:01.083 START TEST nvmf_filesystem 00:09:01.083 ************************************ 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:01.083 * Looking for test storage... 
00:09:01.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:01.083 09:17:11 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:01.083 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:01.084 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:01.084 #define SPDK_CONFIG_H 00:09:01.084 #define SPDK_CONFIG_APPS 1 00:09:01.084 #define SPDK_CONFIG_ARCH native 00:09:01.084 #undef SPDK_CONFIG_ASAN 00:09:01.084 #undef SPDK_CONFIG_AVAHI 00:09:01.084 #undef SPDK_CONFIG_CET 00:09:01.084 #define SPDK_CONFIG_COVERAGE 1 00:09:01.084 #define SPDK_CONFIG_CROSS_PREFIX 00:09:01.084 #undef SPDK_CONFIG_CRYPTO 00:09:01.084 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:01.084 #undef SPDK_CONFIG_CUSTOMOCF 00:09:01.084 #undef SPDK_CONFIG_DAOS 00:09:01.084 #define SPDK_CONFIG_DAOS_DIR 00:09:01.084 #define SPDK_CONFIG_DEBUG 1 00:09:01.084 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:01.084 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:01.084 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:01.084 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:01.084 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:01.084 #undef SPDK_CONFIG_DPDK_UADK 00:09:01.084 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:01.084 #define SPDK_CONFIG_EXAMPLES 1 00:09:01.084 #undef SPDK_CONFIG_FC 00:09:01.084 #define SPDK_CONFIG_FC_PATH 00:09:01.084 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:01.084 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:01.084 #undef SPDK_CONFIG_FUSE 00:09:01.084 #undef SPDK_CONFIG_FUZZER 00:09:01.084 #define SPDK_CONFIG_FUZZER_LIB 00:09:01.084 #undef SPDK_CONFIG_GOLANG 00:09:01.084 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:01.084 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:01.084 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:01.084 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:01.084 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:01.084 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:01.084 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:01.084 #define SPDK_CONFIG_IDXD 1 00:09:01.084 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:01.084 #undef SPDK_CONFIG_IPSEC_MB 00:09:01.084 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:01.084 #define SPDK_CONFIG_ISAL 1 00:09:01.084 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:01.084 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:01.084 #define SPDK_CONFIG_LIBDIR 00:09:01.084 #undef SPDK_CONFIG_LTO 00:09:01.084 #define SPDK_CONFIG_MAX_LCORES 128 00:09:01.084 #define SPDK_CONFIG_NVME_CUSE 1 00:09:01.084 #undef SPDK_CONFIG_OCF 00:09:01.084 #define SPDK_CONFIG_OCF_PATH 00:09:01.084 #define 
SPDK_CONFIG_OPENSSL_PATH 00:09:01.084 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:01.084 #define SPDK_CONFIG_PGO_DIR 00:09:01.084 #undef SPDK_CONFIG_PGO_USE 00:09:01.084 #define SPDK_CONFIG_PREFIX /usr/local 00:09:01.084 #undef SPDK_CONFIG_RAID5F 00:09:01.084 #undef SPDK_CONFIG_RBD 00:09:01.084 #define SPDK_CONFIG_RDMA 1 00:09:01.084 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:01.084 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:01.084 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:01.084 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:01.084 #define SPDK_CONFIG_SHARED 1 00:09:01.084 #undef SPDK_CONFIG_SMA 00:09:01.084 #define SPDK_CONFIG_TESTS 1 00:09:01.084 #undef SPDK_CONFIG_TSAN 00:09:01.084 #define SPDK_CONFIG_UBLK 1 00:09:01.084 #define SPDK_CONFIG_UBSAN 1 00:09:01.084 #undef SPDK_CONFIG_UNIT_TESTS 00:09:01.084 #undef SPDK_CONFIG_URING 00:09:01.084 #define SPDK_CONFIG_URING_PATH 00:09:01.084 #undef SPDK_CONFIG_URING_ZNS 00:09:01.084 #undef SPDK_CONFIG_USDT 00:09:01.085 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:01.085 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:01.085 #define SPDK_CONFIG_VFIO_USER 1 00:09:01.085 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:01.085 #define SPDK_CONFIG_VHOST 1 00:09:01.085 #define SPDK_CONFIG_VIRTIO 1 00:09:01.085 #undef SPDK_CONFIG_VTUNE 00:09:01.085 #define SPDK_CONFIG_VTUNE_DIR 00:09:01.085 #define SPDK_CONFIG_WERROR 1 00:09:01.085 #define SPDK_CONFIG_WPDK_DIR 00:09:01.085 #undef SPDK_CONFIG_XNVME 00:09:01.085 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:01.085 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:01.086 09:17:11 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:01.086 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
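
[Note] The long run of ": 0" / "export SPDK_TEST_*" pairs traced above is bash's default-then-export idiom: each knob keeps whatever value autorun-spdk.conf injected earlier and otherwise falls back to a default. A minimal sketch of the pattern, with flag names taken from the trace (the exact quoting inside autotest_common.sh may differ, and where the job config already set a flag the printed value is the injected one rather than the script's default):

    # ':' is a no-op command, so the parameter expansion only assigns when
    # the variable is still unset; xtrace then prints it as ": 0" / ": 1".
    : "${RUN_NIGHTLY:=0}";                export RUN_NIGHTLY
    : "${SPDK_RUN_FUNCTIONAL_TEST:=1}";   export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
    # Flags the job config set to 1 (SPDK_TEST_NVMF, SPDK_TEST_NVME_CLI,
    # SPDK_TEST_VFIOUSER, SPDK_RUN_UBSAN) pass through here unchanged.
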
00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 733941 ]] 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 733941 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.2wKrTf 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.2wKrTf/tests/target /tmp/spdk.2wKrTf 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=952066048 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4332363776 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=56572764160 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994713088 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5421948928 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30993981440 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996922368 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:09:01.087 09:17:11 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=434176 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:01.087 * Looking for test storage... 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=56572764160 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7636541440 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:01.087 09:17:11 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.087 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
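
[Note] Condensed, the set_test_storage walk traced a few entries back resolves which mount backs the test directory, tabulates per-mount free space from df -T, and exports SPDK_TEST_STORAGE once a candidate holds the roughly 2 GiB requested. A simplified sketch of that logic, not the verbatim helper (it skips the tmpfs/overlay special cases and the /tmp/spdk.2wKrTf fallback candidates visible in the log; the 1K-block-to-bytes conversion is inferred from the traced values):

    requested_size=2214592512   # ~2 GiB plus overhead, as printed in the trace
    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target

    # Mount point backing the test directory ("/" in this run).
    mount=$(df "$testdir" | awk '$1 !~ /Filesystem/ {print $6}')

    # Free space per mount, converted to bytes (df -T reports 1K blocks).
    declare -A avails
    while read -r source fs size use avail _ mnt; do
        avails["$mnt"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)

    if (( ${avails[$mount]:-0} >= requested_size )); then
        export SPDK_TEST_STORAGE=$testdir
        printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"
    fi
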
00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.088 09:17:11 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:01.088 09:17:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.992 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:02.993 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:02.993 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.993 09:17:14 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:02.993 Found net devices under 0000:09:00.0: cvl_0_0 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:02.993 Found net devices under 0000:09:00.1: cvl_0_1 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:09:02.993 00:09:02.993 --- 10.0.0.2 ping statistics --- 00:09:02.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.993 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:09:02.993 00:09:02.993 --- 10.0.0.1 ping statistics --- 00:09:02.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.993 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.993 09:17:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:02.994 09:17:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:02.994 09:17:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.994 09:17:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:03.251 ************************************ 00:09:03.251 START TEST nvmf_filesystem_no_in_capsule 00:09:03.251 ************************************ 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=735571 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 735571 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 735571 ']' 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.251 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:03.251 [2024-07-15 09:17:14.258982] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:09:03.252 [2024-07-15 09:17:14.259051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.252 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.252 [2024-07-15 09:17:14.320447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.252 [2024-07-15 09:17:14.424735] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.252 [2024-07-15 09:17:14.424785] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.252 [2024-07-15 09:17:14.424798] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.252 [2024-07-15 09:17:14.424835] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.252 [2024-07-15 09:17:14.424846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
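The nvmf/common.sh@229-268 trace above is the whole TCP test-bed setup: one port of the NIC pair is moved into a network namespace to act as the target, the other stays in the host stack as the initiator, the NVMe/TCP port 4420 is opened, and reachability is verified in both directions. Condensed into a standalone sketch (device names, addresses and port are the ones from this run; the real helper also covers RDMA and multi-NIC layouts omitted here):

  TARGET_NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the host stack
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
  ping -c 1 10.0.0.2                              # host -> namespace
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1   # namespace -> host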
00:09:03.252 [2024-07-15 09:17:14.424925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.252 [2024-07-15 09:17:14.424989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.252 [2024-07-15 09:17:14.425053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.252 [2024-07-15 09:17:14.425055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:03.509 [2024-07-15 09:17:14.575666] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.509 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:03.769 Malloc1 00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x
00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:03.769 [2024-07-15 09:17:14.749193] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1
00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info
00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs
00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb
00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:03.769 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:03.770 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:03.770 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[
00:09:03.770 {
00:09:03.770 "name": "Malloc1",
00:09:03.770 "aliases": [
00:09:03.770 "277bbbdd-0d28-478d-8282-719b124c84c7"
00:09:03.770 ],
00:09:03.770 "product_name": "Malloc disk",
00:09:03.770 "block_size": 512,
00:09:03.770 "num_blocks": 1048576,
00:09:03.770 "uuid": "277bbbdd-0d28-478d-8282-719b124c84c7",
00:09:03.770 "assigned_rate_limits": {
00:09:03.770 "rw_ios_per_sec": 0,
00:09:03.770 "rw_mbytes_per_sec": 0,
00:09:03.770 "r_mbytes_per_sec": 0,
00:09:03.770 "w_mbytes_per_sec": 0
00:09:03.770 },
00:09:03.770 "claimed": true,
00:09:03.770 "claim_type": "exclusive_write",
00:09:03.770 "zoned": false,
00:09:03.770 "supported_io_types": {
00:09:03.770 "read": true,
00:09:03.770 "write": true,
00:09:03.770 "unmap": true,
00:09:03.770 "flush": true,
00:09:03.770 "reset": true,
00:09:03.770 "nvme_admin": false,
00:09:03.770 "nvme_io": false,
00:09:03.770 "nvme_io_md": false,
00:09:03.770 "write_zeroes": true,
00:09:03.770 "zcopy": true,
00:09:03.770 "get_zone_info": false,
00:09:03.770 "zone_management": false,
00:09:03.770 "zone_append": false,
00:09:03.770 "compare": false,
00:09:03.770 "compare_and_write": false,
00:09:03.770 "abort": true,
00:09:03.770 "seek_hole": false,
00:09:03.770 "seek_data": false,
00:09:03.770 "copy": true,
00:09:03.770 "nvme_iov_md": false
00:09:03.770 },
00:09:03.770 "memory_domains": [
00:09:03.770 {
00:09:03.770 "dma_device_id": "system",
00:09:03.770 "dma_device_type": 1
00:09:03.770 },
00:09:03.770 {
00:09:03.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:03.770 "dma_device_type": 2
00:09:03.770 }
00:09:03.770 ],
00:09:03.770 "driver_specific": {}
00:09:03.770 }
00:09:03.770 ]'
00:09:03.770 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:09:03.770 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512
00:09:03.770 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:09:03.770 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576
00:09:03.770 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512
00:09:03.770 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512
00:09:03.770 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:09:03.770 09:17:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:04.707 09:17:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:09:04.707 09:17:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0
00:09:04.707 09:17:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:09:04.708 09:17:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:09:04.708 09:17:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2
00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0
00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- #
sec_size_to_bytes nvme0n1 00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:06.611 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:06.869 09:17:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:07.128 09:17:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:08.065 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:08.065 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:08.065 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:08.065 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.065 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.325 ************************************ 00:09:08.325 START TEST filesystem_ext4 00:09:08.325 ************************************ 00:09:08.325 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:08.325 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:08.325 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:08.325 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:08.325 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:08.325 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:08.325 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:08.325 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:08.325 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:08.325 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:08.325 09:17:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:09:08.325 mke2fs 1.46.5 (30-Dec-2021)
00:09:08.325 Discarding device blocks: 0/522240 done
00:09:08.325 Creating filesystem with 522240 1k blocks and 130560 inodes
00:09:08.325 Filesystem UUID: 9b5422e5-3b4d-4146-bcb1-521d73902b8a
00:09:08.325 Superblock backups stored on blocks:
00:09:08.325 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:09:08.325
00:09:08.325 Allocating group tables: 0/64 done
00:09:08.325 Writing inode tables: 0/64 done
00:09:08.584 Creating journal (8192 blocks): done
00:09:08.584 Writing superblocks and filesystem accounting information: 0/64 done
00:09:08.584
00:09:08.584 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0
00:09:08.584 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 735571
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:09:08.843
00:09:08.843 real 0m0.666s
00:09:08.843 user 0m0.024s
00:09:08.843 sys 0m0.052s
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:09:08.843 ************************************
00:09:08.843 END TEST filesystem_ext4
00:09:08.843 ************************************
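The ext4 pass that just finished, and the btrfs and xfs passes that follow, all run the same smoke test traced at target/filesystem.sh@23-43: mount the freshly formatted partition, create and delete a file, unmount, then confirm the target survived and the device and partition are still visible. As a minimal sketch, with the pid and device names from this run:

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  i=0                                  # retry counter used by the real helper
  umount /mnt/device
  kill -0 735571                       # nvmf_tgt must not have crashed
  lsblk -l -o NAME | grep -q -w nvme0n1
  lsblk -l -o NAME | grep -q -w nvme0n1p1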
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:08.843 ************************************
00:09:08.843 START TEST filesystem_btrfs
00:09:08.843 ************************************
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']'
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f
00:09:08.843 09:17:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:09:09.101 btrfs-progs v6.6.2
00:09:09.101 See https://btrfs.readthedocs.io for more information.
00:09:09.101
00:09:09.101 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:09:09.101 NOTE: several default settings have changed in version 5.15, please make sure
00:09:09.101 this does not affect your deployments:
00:09:09.101 - DUP for metadata (-m dup)
00:09:09.101 - enabled no-holes (-O no-holes)
00:09:09.101 - enabled free-space-tree (-R free-space-tree)
00:09:09.101
00:09:09.101 Label: (null)
00:09:09.101 UUID: 124f0314-4d23-4421-a0a7-6eaf4854d1a4
00:09:09.101 Node size: 16384
00:09:09.101 Sector size: 4096
00:09:09.101 Filesystem size: 510.00MiB
00:09:09.101 Block group profiles:
00:09:09.101 Data: single 8.00MiB
00:09:09.101 Metadata: DUP 32.00MiB
00:09:09.101 System: DUP 8.00MiB
00:09:09.101 SSD detected: yes
00:09:09.101 Zoned device: no
00:09:09.101 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:09:09.101 Runtime features: free-space-tree
00:09:09.101 Checksum: crc32c
00:09:09.101 Number of devices: 1
00:09:09.101 Devices:
00:09:09.101 ID SIZE PATH
00:09:09.101 1 510.00MiB /dev/nvme0n1p1
00:09:09.101
00:09:09.101 09:17:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0
00:09:09.101 09:17:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:10.038 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:10.038 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:09:10.038 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:10.038 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:09:10.038 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:09:10.038 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:10.038 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 735571
00:09:10.038 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:10.038 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:10.038 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:10.038 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:09:10.038
00:09:10.038 real 0m1.220s
00:09:10.038 user 0m0.019s
00:09:10.038 sys 0m0.159s
00:09:10.038 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:10.038 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:09:10.038 ************************************
00:09:10.038 END TEST filesystem_btrfs
00:09:10.038 ************************************
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0
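Every START/END banner and real/user/sys block in this log comes from the run_test wrapper (common/autotest_common.sh@1099-1142). Its approximate shape, inferred from the trace rather than copied from the SPDK source:

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"          # bash's time keyword emits the real/user/sys summary
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }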
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:10.296 ************************************
00:09:10.296 START TEST filesystem_xfs
00:09:10.296 ************************************
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']'
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f
00:09:10.296 09:17:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1
00:09:10.296 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:09:10.296 = sectsz=512 attr=2, projid32bit=1
00:09:10.296 = crc=1 finobt=1, sparse=1, rmapbt=0
00:09:10.296 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:09:10.296 data = bsize=4096 blocks=130560, imaxpct=25
00:09:10.296 = sunit=0 swidth=0 blks
00:09:10.296 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:09:10.296 log =internal log bsize=4096 blocks=16384, version=2
00:09:10.296 = sectsz=512 sunit=0 blks, lazy-count=1
00:09:10.296 realtime =none extsz=4096 blocks=0, rtextents=0
00:09:11.233 Discarding blocks...Done.
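All three mkfs invocations above go through the make_filesystem helper traced at common/autotest_common.sh@924-935, which mostly just picks the right force flag per filesystem flavor. A simplified sketch (the real helper also declares a retry counter i, visible in the trace, whose loop is omitted here):

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then
          force=-F        # mke2fs spells "force" as -F
      else
          force=-f        # mkfs.btrfs and mkfs.xfs use -f
      fi
      mkfs.$fstype $force "$dev_name"
  }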
00:09:11.233 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:11.233 09:17:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:13.765 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:14.024 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:14.024 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:14.024 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:14.024 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:14.024 09:17:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:14.024 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 735571 00:09:14.024 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:14.024 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:14.024 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:14.024 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:14.024 00:09:14.025 real 0m3.772s 00:09:14.025 user 0m0.019s 00:09:14.025 sys 0m0.085s 00:09:14.025 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.025 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:14.025 ************************************ 00:09:14.025 END TEST filesystem_xfs 00:09:14.025 ************************************ 00:09:14.025 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:14.025 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.285 09:17:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 735571 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 735571 ']' 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 735571 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 735571 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 735571' 00:09:14.285 killing process with pid 735571 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 735571 00:09:14.285 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 735571 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:14.854 00:09:14.854 real 0m11.691s 00:09:14.854 user 0m44.845s 00:09:14.854 sys 0m1.816s 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.854 ************************************ 00:09:14.854 END TEST nvmf_filesystem_no_in_capsule 00:09:14.854 ************************************ 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.854 ************************************ 00:09:14.854 START TEST nvmf_filesystem_in_capsule 00:09:14.854 ************************************ 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=737131 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 737131 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 737131 ']' 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.854 09:17:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.854 [2024-07-15 09:17:26.007251] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:09:14.854 [2024-07-15 09:17:26.007320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.854 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.113 [2024-07-15 09:17:26.073975] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.113 [2024-07-15 09:17:26.183468] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.113 [2024-07-15 09:17:26.183522] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
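The teardown traced just before this point (target/filesystem.sh@91-101) and the restart now under way are the only glue between the two passes; the single functional difference is the -c argument later handed to nvmf_create_transport (0 above, 4096 below). Roughly, with the paths and pid from this run, and rpc_cmd standing in for the harness wrapper around SPDK's rpc.py:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the test partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 735571 && wait 735571                         # killprocess: old nvmf_tgt
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF &                        # relaunch for the in-capsule pass
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # vs. -c 0 in the first pass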
00:09:15.113 [2024-07-15 09:17:26.183536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.113 [2024-07-15 09:17:26.183547] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.113 [2024-07-15 09:17:26.183556] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.113 [2024-07-15 09:17:26.183628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.113 [2024-07-15 09:17:26.183690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.113 [2024-07-15 09:17:26.183751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.113 [2024-07-15 09:17:26.183754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.371 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.371 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:15.371 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.371 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.371 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.372 [2024-07-15 09:17:26.341707] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.372 Malloc1 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.372 09:17:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:15.372 [2024-07-15 09:17:26.518584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[
00:09:15.372 {
00:09:15.372 "name": "Malloc1",
00:09:15.372 "aliases": [
00:09:15.372 "c0814d94-a4be-4d1d-b656-bef26cd46f92"
00:09:15.372 ],
00:09:15.372 "product_name": "Malloc disk",
00:09:15.372 "block_size": 512,
00:09:15.372 "num_blocks": 1048576,
00:09:15.372 "uuid": "c0814d94-a4be-4d1d-b656-bef26cd46f92",
00:09:15.372 "assigned_rate_limits": {
00:09:15.372 "rw_ios_per_sec": 0,
00:09:15.372 "rw_mbytes_per_sec": 0,
00:09:15.372 "r_mbytes_per_sec": 0,
00:09:15.372 "w_mbytes_per_sec": 0
00:09:15.372 },
00:09:15.372 "claimed": true,
00:09:15.372 "claim_type": "exclusive_write",
00:09:15.372 "zoned": false,
00:09:15.372 "supported_io_types": {
00:09:15.372 "read": true,
00:09:15.372 "write": true,
00:09:15.372 "unmap": true,
00:09:15.372 "flush": true,
00:09:15.372 "reset": true,
00:09:15.372 "nvme_admin": false,
00:09:15.372 "nvme_io": false,
00:09:15.372 "nvme_io_md": false,
00:09:15.372 "write_zeroes": true,
00:09:15.372 "zcopy": true,
00:09:15.372 "get_zone_info": false,
00:09:15.372 "zone_management": false,
00:09:15.372 "zone_append": false,
00:09:15.372 "compare": false,
00:09:15.372 "compare_and_write": false,
00:09:15.372 "abort": true,
00:09:15.372 "seek_hole": false,
00:09:15.372 "seek_data": false,
00:09:15.372 "copy": true,
00:09:15.372 "nvme_iov_md": false
00:09:15.372 },
00:09:15.372 "memory_domains": [
00:09:15.372 {
00:09:15.372 "dma_device_id": "system",
00:09:15.372 "dma_device_type": 1
00:09:15.372 },
00:09:15.372 {
00:09:15.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:15.372 "dma_device_type": 2
00:09:15.372 }
00:09:15.372 ],
00:09:15.372 "driver_specific": {}
00:09:15.372 }
00:09:15.372 ]'
00:09:15.372 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:09:15.632 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512
00:09:15.632 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:09:15.632 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576
00:09:15.632 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512
00:09:15.632 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512
00:09:15.632 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:09:15.632 09:17:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:16.202 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:09:16.203 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0
00:09:16.203 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:09:16.203 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:09:16.203 09:17:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2
00:09:18.106 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:09:18.106 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:09:18.106 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0
00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:18.364 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:18.624 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:18.884 09:17:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.822 ************************************ 00:09:19.822 START TEST filesystem_in_capsule_ext4 00:09:19.822 ************************************ 00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:19.822 09:17:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']'
00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F
00:09:19.822 09:17:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:09:19.822 mke2fs 1.46.5 (30-Dec-2021)
00:09:20.081 Discarding device blocks: 0/522240 done
00:09:20.081 Creating filesystem with 522240 1k blocks and 130560 inodes
00:09:20.081 Filesystem UUID: 03e41b95-0fea-4e87-b2f6-bc251e70138b
00:09:20.081 Superblock backups stored on blocks:
00:09:20.081 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:09:20.081
00:09:20.081 Allocating group tables: 0/64 done
00:09:20.081 Writing inode tables: 0/64 done
00:09:20.081 Creating journal (8192 blocks): done
00:09:20.341 Writing superblocks and filesystem accounting information: 0/64 done
00:09:20.341
00:09:20.341 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0
00:09:20.341 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:20.905 09:17:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:20.905 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:09:20.905 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:20.905 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:09:20.905 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:09:20.905 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:20.905 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 737131
00:09:20.905 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:20.905 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:20.905 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:20.905 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:09:20.905
00:09:20.905 real 0m1.128s
00:09:20.905 user 0m0.016s
00:09:20.905 sys 0m0.055s
00:09:20.905 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:20.905 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:09:21.162 ************************************
00:09:21.162 END TEST filesystem_in_capsule_ext4
00:09:21.162 ************************************
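Before each mkfs pass, both runs resolve the kernel device for the new controller and sanity-check its size against the 512 MiB malloc bdev (target/filesystem.sh@62-67 plus the waitforserial and sec_size_to_bytes helpers traced earlier). A condensed sketch; rpc_cmd stands in for the harness wrapper around SPDK's rpc.py, and the polling loop is simplified from the traced helper:

  # Wait until the namespace with our serial shows up as a block device.
  serial=SPDKISFASTANDAWESOME
  for ((i = 0; i <= 15; i++)); do
      (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && break
      sleep 2
  done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP "([\w]*)(?=\s+$serial)")   # -> nvme0n1

  # The bdev size reported over RPC must match what the kernel sees.
  bs=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')    # 512
  nb=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')    # 1048576
  malloc_size=$((bs * nb))                                          # 536870912
  nvme_size=$(( $(cat "/sys/block/$nvme_name/size") * 512 ))        # roughly sec_size_to_bytes
  (( nvme_size == malloc_size )) || exit 1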
09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.162 ************************************ 00:09:21.162 START TEST filesystem_in_capsule_btrfs 00:09:21.162 ************************************ 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:21.162 btrfs-progs v6.6.2 00:09:21.162 See https://btrfs.readthedocs.io for more information. 00:09:21.162 00:09:21.162 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
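The make_filesystem trace above shows how the helper picks mkfs's force flag per filesystem type: -F when the type is ext4, -f for btrfs and xfs. A minimal bash sketch of that selection, reconstructed only from the flags visible in the trace rather than copied from common/autotest_common.sh (the function name is illustrative):

make_filesystem_sketch() {
    # Reconstructed from the traced checks: '[' ext4 = ext4 ']' sets
    # force=-F, while btrfs and xfs fall through to force=-f.
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F          # mkfs.ext4 spells "force" as -F
    else
        force=-f          # mkfs.btrfs and mkfs.xfs use lowercase -f
    fi
    "mkfs.$fstype" "$force" "$dev_name"
}

Called as make_filesystem_sketch btrfs /dev/nvme0n1p1, this reproduces the mkfs.btrfs -f invocation whose output continues below.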
00:09:21.162 NOTE: several default settings have changed in version 5.15, please make sure 00:09:21.162 this does not affect your deployments: 00:09:21.162 - DUP for metadata (-m dup) 00:09:21.162 - enabled no-holes (-O no-holes) 00:09:21.162 - enabled free-space-tree (-R free-space-tree) 00:09:21.162 00:09:21.162 Label: (null) 00:09:21.162 UUID: 1b931f4f-80f3-4852-af63-8dd2f19e8bc4 00:09:21.162 Node size: 16384 00:09:21.162 Sector size: 4096 00:09:21.162 Filesystem size: 510.00MiB 00:09:21.162 Block group profiles: 00:09:21.162 Data: single 8.00MiB 00:09:21.162 Metadata: DUP 32.00MiB 00:09:21.162 System: DUP 8.00MiB 00:09:21.162 SSD detected: yes 00:09:21.162 Zoned device: no 00:09:21.162 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:21.162 Runtime features: free-space-tree 00:09:21.162 Checksum: crc32c 00:09:21.162 Number of devices: 1 00:09:21.162 Devices: 00:09:21.162 ID SIZE PATH 00:09:21.162 1 510.00MiB /dev/nvme0n1p1 00:09:21.162 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:21.162 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:22.098 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:22.098 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:22.098 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:22.098 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:22.098 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:22.098 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:22.098 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 737131 00:09:22.098 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:22.098 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:22.098 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:22.098 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:22.098 00:09:22.098 real 0m0.853s 00:09:22.098 user 0m0.016s 00:09:22.098 sys 0m0.116s 00:09:22.098 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.098 09:17:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:22.098 ************************************ 00:09:22.098 END TEST filesystem_in_capsule_btrfs 00:09:22.098 ************************************ 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.098 ************************************ 00:09:22.098 START TEST filesystem_in_capsule_xfs 00:09:22.098 ************************************ 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:22.098 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:22.098 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:22.098 = sectsz=512 attr=2, projid32bit=1 00:09:22.098 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:22.098 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:22.098 data = bsize=4096 blocks=130560, imaxpct=25 00:09:22.098 = sunit=0 swidth=0 blks 00:09:22.098 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:22.098 log =internal log bsize=4096 blocks=16384, version=2 00:09:22.098 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:22.098 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:23.036 Discarding blocks...Done. 
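With mkfs.xfs finished, the xfs pass repeats the exact exercise already run for ext4 and btrfs: mount the partition, create a file, sync, remove it, sync again, then unmount. A self-contained replay of that cycle using the names from the trace (/dev/nvme0n1p1, /mnt/device); this is an illustrative condensation, not the harness's own filesystem.sh:

#!/usr/bin/env bash
# Replay of the traced mount/touch/sync/rm/umount cycle; assumes the
# partition and mount point from the log above already exist.
set -euo pipefail

dev=/dev/nvme0n1p1
mnt=/mnt/device

mount "$dev" "$mnt"
touch "$mnt/aaa"   # write a file through the NVMe-oF namespace
sync               # flush it out to the target
rm "$mnt/aaa"
sync
umount "$mnt"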
00:09:23.036 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:23.036 09:17:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:24.940 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:24.940 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:24.940 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:24.940 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:24.940 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:24.940 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:24.940 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 737131 00:09:24.940 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:24.940 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:24.940 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:24.940 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:24.940 00:09:24.940 real 0m3.053s 00:09:24.940 user 0m0.014s 00:09:24.940 sys 0m0.065s 00:09:24.941 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.941 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:24.941 ************************************ 00:09:24.941 END TEST filesystem_in_capsule_xfs 00:09:24.941 ************************************ 00:09:24.941 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:24.941 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:25.198 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:25.198 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.198 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.198 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:25.199 09:17:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 737131 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 737131 ']' 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 737131 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 737131 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 737131' 00:09:25.199 killing process with pid 737131 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 737131 00:09:25.199 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 737131 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:25.764 00:09:25.764 real 0m10.838s 00:09:25.764 user 0m41.355s 00:09:25.764 sys 0m1.726s 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:25.764 ************************************ 00:09:25.764 END TEST nvmf_filesystem_in_capsule 00:09:25.764 ************************************ 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:25.764 rmmod nvme_tcp 00:09:25.764 rmmod nvme_fabrics 00:09:25.764 rmmod nvme_keyring 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.764 09:17:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.303 09:17:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:28.303 00:09:28.303 real 0m27.113s 00:09:28.303 user 1m27.137s 00:09:28.303 sys 0m5.189s 00:09:28.303 09:17:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.303 09:17:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:28.303 ************************************ 00:09:28.303 END TEST nvmf_filesystem 00:09:28.303 ************************************ 00:09:28.303 09:17:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:28.303 09:17:38 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:28.303 09:17:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:28.303 09:17:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.303 09:17:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.303 ************************************ 00:09:28.303 START TEST nvmf_target_discovery 00:09:28.303 ************************************ 00:09:28.303 09:17:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:28.303 * Looking for test storage... 
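The discovery test starting here rebuilds the same two-port TCP test bed traced in nvmftestinit below: the first E810 port (cvl_0_0) is moved into a network namespace to serve as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A minimal sketch of that wiring, using only the interface and namespace names that appear in the trace:

# Condensed replay of the traced nvmf_tcp_init wiring (names from the log).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator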
00:09:28.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:28.303 09:17:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.303 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:28.303 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.303 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.303 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.303 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.303 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.303 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:28.304 09:17:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.206 09:17:41 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.206 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:30.207 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:30.207 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:30.207 Found net devices under 0000:09:00.0: cvl_0_0 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:30.207 Found net devices under 0000:09:00.1: cvl_0_1 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:30.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:09:30.207 00:09:30.207 --- 10.0.0.2 ping statistics --- 00:09:30.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.207 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:09:30.207 00:09:30.207 --- 10.0.0.1 ping statistics --- 00:09:30.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.207 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=740589 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 740589 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 740589 ']' 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:30.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.207 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.207 [2024-07-15 09:17:41.314217] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:09:30.207 [2024-07-15 09:17:41.314302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.207 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.207 [2024-07-15 09:17:41.376951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.466 [2024-07-15 09:17:41.485022] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.466 [2024-07-15 09:17:41.485070] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.466 [2024-07-15 09:17:41.485098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.466 [2024-07-15 09:17:41.485116] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.466 [2024-07-15 09:17:41.485125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.466 [2024-07-15 09:17:41.486822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.466 [2024-07-15 09:17:41.486860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.466 [2024-07-15 09:17:41.486885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.466 [2024-07-15 09:17:41.486888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.466 [2024-07-15 09:17:41.634427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
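The rpc_cmd calls traced here drive SPDK's JSON-RPC interface over /var/tmp/spdk.sock; outside the harness the same provisioning can be issued with scripts/rpc.py from an SPDK checkout (path assumed). A sketch of the loop the test is performing for its four null bdevs, using only RPCs and arguments visible in the trace:

#!/usr/bin/env bash
# Provision four null-backed NVMe-oF subsystems, mirroring the trace.
set -euo pipefail
rpc=scripts/rpc.py    # assumed location inside an SPDK checkout

$rpc nvmf_create_transport -t tcp -o -u 8192

for i in 1 2 3 4; do
    $rpc bdev_null_create "Null$i" 102400 512        # size/block args from the trace
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"                  # -a: allow any host
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done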
00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.466 Null1 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.466 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.727 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.727 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:30.727 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.727 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 [2024-07-15 09:17:41.678714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 Null2 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:30.728 09:17:41 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 Null3 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 Null4 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:09:30.728 00:09:30.728 Discovery Log Number of Records 6, Generation counter 6 00:09:30.728 =====Discovery Log Entry 0====== 00:09:30.728 trtype: tcp 00:09:30.728 adrfam: ipv4 00:09:30.728 subtype: current discovery subsystem 00:09:30.728 treq: not required 00:09:30.728 portid: 0 00:09:30.728 trsvcid: 4420 00:09:30.728 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:30.728 traddr: 10.0.0.2 00:09:30.728 eflags: explicit discovery connections, duplicate discovery information 00:09:30.728 sectype: none 00:09:30.728 =====Discovery Log Entry 1====== 00:09:30.728 trtype: tcp 00:09:30.728 adrfam: ipv4 00:09:30.728 subtype: nvme subsystem 00:09:30.728 treq: not required 00:09:30.728 portid: 0 00:09:30.728 trsvcid: 4420 00:09:30.728 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:30.728 traddr: 10.0.0.2 00:09:30.728 eflags: none 00:09:30.728 sectype: none 00:09:30.728 =====Discovery Log Entry 2====== 00:09:30.728 trtype: tcp 00:09:30.728 adrfam: ipv4 00:09:30.728 subtype: nvme subsystem 00:09:30.728 treq: not required 00:09:30.728 portid: 0 00:09:30.728 trsvcid: 4420 00:09:30.728 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:30.728 traddr: 10.0.0.2 00:09:30.728 eflags: none 00:09:30.728 sectype: none 00:09:30.728 =====Discovery Log Entry 3====== 00:09:30.728 trtype: tcp 00:09:30.728 adrfam: ipv4 00:09:30.728 subtype: nvme subsystem 00:09:30.728 treq: not required 00:09:30.728 portid: 0 00:09:30.728 trsvcid: 4420 00:09:30.728 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:30.728 traddr: 10.0.0.2 00:09:30.728 eflags: none 00:09:30.728 sectype: none 00:09:30.728 =====Discovery Log Entry 4====== 00:09:30.728 trtype: tcp 00:09:30.728 adrfam: ipv4 00:09:30.728 subtype: nvme subsystem 00:09:30.728 treq: not required 
00:09:30.728 portid: 0 00:09:30.728 trsvcid: 4420 00:09:30.728 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:30.728 traddr: 10.0.0.2 00:09:30.728 eflags: none 00:09:30.728 sectype: none 00:09:30.728 =====Discovery Log Entry 5====== 00:09:30.728 trtype: tcp 00:09:30.728 adrfam: ipv4 00:09:30.728 subtype: discovery subsystem referral 00:09:30.728 treq: not required 00:09:30.728 portid: 0 00:09:30.728 trsvcid: 4430 00:09:30.728 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:30.728 traddr: 10.0.0.2 00:09:30.728 eflags: none 00:09:30.728 sectype: none 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:30.728 Perform nvmf subsystem discovery via RPC 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.728 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.990 [ 00:09:30.990 { 00:09:30.990 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:30.990 "subtype": "Discovery", 00:09:30.990 "listen_addresses": [ 00:09:30.990 { 00:09:30.990 "trtype": "TCP", 00:09:30.990 "adrfam": "IPv4", 00:09:30.990 "traddr": "10.0.0.2", 00:09:30.990 "trsvcid": "4420" 00:09:30.990 } 00:09:30.990 ], 00:09:30.990 "allow_any_host": true, 00:09:30.990 "hosts": [] 00:09:30.990 }, 00:09:30.990 { 00:09:30.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.990 "subtype": "NVMe", 00:09:30.990 "listen_addresses": [ 00:09:30.990 { 00:09:30.990 "trtype": "TCP", 00:09:30.990 "adrfam": "IPv4", 00:09:30.990 "traddr": "10.0.0.2", 00:09:30.990 "trsvcid": "4420" 00:09:30.990 } 00:09:30.990 ], 00:09:30.990 "allow_any_host": true, 00:09:30.990 "hosts": [], 00:09:30.990 "serial_number": "SPDK00000000000001", 00:09:30.990 "model_number": "SPDK bdev Controller", 00:09:30.990 "max_namespaces": 32, 00:09:30.990 "min_cntlid": 1, 00:09:30.990 "max_cntlid": 65519, 00:09:30.990 "namespaces": [ 00:09:30.990 { 00:09:30.990 "nsid": 1, 00:09:30.990 "bdev_name": "Null1", 00:09:30.990 "name": "Null1", 00:09:30.990 "nguid": "0ADDA771BCB64F7EB4AD16BC760CF0B5", 00:09:30.990 "uuid": "0adda771-bcb6-4f7e-b4ad-16bc760cf0b5" 00:09:30.990 } 00:09:30.990 ] 00:09:30.990 }, 00:09:30.990 { 00:09:30.990 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:30.990 "subtype": "NVMe", 00:09:30.990 "listen_addresses": [ 00:09:30.990 { 00:09:30.990 "trtype": "TCP", 00:09:30.990 "adrfam": "IPv4", 00:09:30.990 "traddr": "10.0.0.2", 00:09:30.990 "trsvcid": "4420" 00:09:30.990 } 00:09:30.990 ], 00:09:30.990 "allow_any_host": true, 00:09:30.990 "hosts": [], 00:09:30.990 "serial_number": "SPDK00000000000002", 00:09:30.990 "model_number": "SPDK bdev Controller", 00:09:30.990 "max_namespaces": 32, 00:09:30.990 "min_cntlid": 1, 00:09:30.990 "max_cntlid": 65519, 00:09:30.990 "namespaces": [ 00:09:30.990 { 00:09:30.990 "nsid": 1, 00:09:30.990 "bdev_name": "Null2", 00:09:30.990 "name": "Null2", 00:09:30.990 "nguid": "FBF7F9FF357E4406B37AE851F77A12FD", 00:09:30.990 "uuid": "fbf7f9ff-357e-4406-b37a-e851f77a12fd" 00:09:30.990 } 00:09:30.990 ] 00:09:30.990 }, 00:09:30.990 { 00:09:30.990 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:30.990 "subtype": "NVMe", 00:09:30.990 "listen_addresses": [ 00:09:30.990 { 00:09:30.990 "trtype": "TCP", 00:09:30.990 "adrfam": "IPv4", 00:09:30.990 "traddr": "10.0.0.2", 00:09:30.990 "trsvcid": "4420" 00:09:30.990 } 00:09:30.990 ], 00:09:30.990 "allow_any_host": true, 
00:09:30.990 "hosts": [], 00:09:30.990 "serial_number": "SPDK00000000000003", 00:09:30.990 "model_number": "SPDK bdev Controller", 00:09:30.990 "max_namespaces": 32, 00:09:30.990 "min_cntlid": 1, 00:09:30.991 "max_cntlid": 65519, 00:09:30.991 "namespaces": [ 00:09:30.991 { 00:09:30.991 "nsid": 1, 00:09:30.991 "bdev_name": "Null3", 00:09:30.991 "name": "Null3", 00:09:30.991 "nguid": "9C85A0EADC4A4B71BD9893BBC8E40FFD", 00:09:30.991 "uuid": "9c85a0ea-dc4a-4b71-bd98-93bbc8e40ffd" 00:09:30.991 } 00:09:30.991 ] 00:09:30.991 }, 00:09:30.991 { 00:09:30.991 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:30.991 "subtype": "NVMe", 00:09:30.991 "listen_addresses": [ 00:09:30.991 { 00:09:30.991 "trtype": "TCP", 00:09:30.991 "adrfam": "IPv4", 00:09:30.991 "traddr": "10.0.0.2", 00:09:30.991 "trsvcid": "4420" 00:09:30.991 } 00:09:30.991 ], 00:09:30.991 "allow_any_host": true, 00:09:30.991 "hosts": [], 00:09:30.991 "serial_number": "SPDK00000000000004", 00:09:30.991 "model_number": "SPDK bdev Controller", 00:09:30.991 "max_namespaces": 32, 00:09:30.991 "min_cntlid": 1, 00:09:30.991 "max_cntlid": 65519, 00:09:30.991 "namespaces": [ 00:09:30.991 { 00:09:30.991 "nsid": 1, 00:09:30.991 "bdev_name": "Null4", 00:09:30.991 "name": "Null4", 00:09:30.991 "nguid": "AFB4723FF35F43EBA240901A2C5198E2", 00:09:30.991 "uuid": "afb4723f-f35f-43eb-a240-901a2c5198e2" 00:09:30.991 } 00:09:30.991 ] 00:09:30.991 } 00:09:30.991 ] 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.991 09:17:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.991 rmmod nvme_tcp 00:09:30.991 rmmod nvme_fabrics 00:09:30.991 rmmod nvme_keyring 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 740589 ']' 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 740589 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 740589 ']' 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 740589 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 740589 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 740589' 00:09:30.991 killing process with pid 740589 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 740589 00:09:30.991 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 740589 00:09:31.251 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.251 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.251 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.251 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.251 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.251 09:17:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.251 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.251 09:17:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.793 09:17:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.793 00:09:33.793 real 0m5.493s 00:09:33.793 user 0m4.393s 00:09:33.793 sys 0m1.850s 00:09:33.793 09:17:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.793 09:17:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.793 ************************************ 00:09:33.793 END TEST nvmf_target_discovery 00:09:33.793 ************************************ 00:09:33.793 09:17:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:09:33.793 09:17:44 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:33.793 09:17:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.793 09:17:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.793 09:17:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.793 ************************************ 00:09:33.793 START TEST nvmf_referrals 00:09:33.793 ************************************ 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:33.793 * Looking for test storage... 00:09:33.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
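Note: NVMF_REFERRAL_IP_1 through _3 above, together with NVMF_PORT_REFERRAL (4430, set just below), are the referral endpoints this test registers on the discovery subsystem and then expects back from both rpc_cmd and nvme discover. A rough standalone sketch of that setup, assuming a running nvmf_tgt answering on the default RPC socket:

    # register the three referrals, then confirm the count via jq,
    # mirroring the rpc_cmd calls in referrals.sh@44-48 below
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expect 3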
00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.793 09:17:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.697 09:17:46 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:35.697 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:35.697 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.697 09:17:46 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:35.697 Found net devices under 0000:09:00.0: cvl_0_0 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:35.697 Found net devices under 0000:09:00.1: cvl_0_1 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.697 09:17:46 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:35.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:09:35.697 00:09:35.697 --- 10.0.0.2 ping statistics --- 00:09:35.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.697 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:09:35.697 00:09:35.697 --- 10.0.0.1 ping statistics --- 00:09:35.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.697 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=742685 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 742685 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 742685 ']' 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
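Note: nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with the flags shown (-i 0 -e 0xFFFF -m 0xF), and waitforlisten then blocks until the application answers on the RPC socket. A simplified sketch of that pattern, assuming the binary and rpc.py locations from this workspace and that rpc_get_methods is the liveness probe (the exact probe used by waitforlisten is not shown in this log):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the target serves RPCs on the default /var/tmp/spdk.sock
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done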
00:09:35.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.697 09:17:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.956 [2024-07-15 09:17:46.936349] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:09:35.956 [2024-07-15 09:17:46.936432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.956 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.956 [2024-07-15 09:17:47.010826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.956 [2024-07-15 09:17:47.122196] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.956 [2024-07-15 09:17:47.122253] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.956 [2024-07-15 09:17:47.122282] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.956 [2024-07-15 09:17:47.122293] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.956 [2024-07-15 09:17:47.122302] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.956 [2024-07-15 09:17:47.122382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.956 [2024-07-15 09:17:47.123541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.956 [2024-07-15 09:17:47.123596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.956 [2024-07-15 09:17:47.123598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.895 [2024-07-15 09:17:47.915531] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.895 [2024-07-15 09:17:47.927717] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:36.895 09:17:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:36.895 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.895 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.895 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:36.895 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.895 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:36.895 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:36.895 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:36.895 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:36.895 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:36.895 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:36.895 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:36.895 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:37.153 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:37.411 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:37.412 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:37.412 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:37.412 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:37.412 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:37.671 09:17:48 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:37.671 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:37.671 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:37.671 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:37.671 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:37.671 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:37.929 09:17:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:37.929 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:37.929 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:37.929 09:17:49 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:37.929 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:37.929 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:37.929 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:37.929 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:38.188 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:38.448 
09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.448 rmmod nvme_tcp 00:09:38.448 rmmod nvme_fabrics 00:09:38.448 rmmod nvme_keyring 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 742685 ']' 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 742685 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 742685 ']' 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 742685 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 742685 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 742685' 00:09:38.448 killing process with pid 742685 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 742685 00:09:38.448 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 742685 00:09:38.706 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.706 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.707 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.707 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.707 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.707 09:17:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.707 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.707 09:17:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.243 09:17:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:41.243 00:09:41.243 real 0m7.348s 00:09:41.243 user 0m12.246s 00:09:41.243 sys 0m2.216s 00:09:41.243 09:17:51 nvmf_tcp.nvmf_referrals 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:41.243 09:17:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.243 ************************************ 00:09:41.243 END TEST nvmf_referrals 00:09:41.243 ************************************ 00:09:41.243 09:17:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:41.243 09:17:51 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:41.243 09:17:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:41.243 09:17:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.243 09:17:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:41.243 ************************************ 00:09:41.243 START TEST nvmf_connect_disconnect 00:09:41.243 ************************************ 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:41.243 * Looking for test storage... 00:09:41.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.243 09:17:51 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.243 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:41.244 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:41.244 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:41.244 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.244 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.244 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.244 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:41.244 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:41.244 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:41.244 09:17:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:43.150 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:43.150 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:43.150 09:17:54 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:43.150 Found net devices under 0000:09:00.0: cvl_0_0 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:43.150 Found net devices under 0000:09:00.1: cvl_0_1 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.150 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:43.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:09:43.151 00:09:43.151 --- 10.0.0.2 ping statistics --- 00:09:43.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.151 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:43.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:09:43.151 00:09:43.151 --- 10.0.0.1 ping statistics --- 00:09:43.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.151 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=744991 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 744991 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 744991 ']' 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:43.151 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.410 [2024-07-15 09:17:54.350535] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
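
The nvmftestinit trace above is the whole network fixture for these TCP tests: the second E810 port (cvl_0_0) is moved into a private namespace to act as the target, the first port (cvl_0_1) stays in the root namespace as the initiator, an iptables ACCEPT rule opens port 4420, and both directions are verified with ping before the target app starts. A condensed replay of the exact commands shown in the trace:

  # Condensed replay of the nvmf_tcp_init commands traced above.
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
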
00:09:43.410 [2024-07-15 09:17:54.350618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.410 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.410 [2024-07-15 09:17:54.412807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.410 [2024-07-15 09:17:54.517037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.410 [2024-07-15 09:17:54.517104] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.410 [2024-07-15 09:17:54.517118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.410 [2024-07-15 09:17:54.517130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.410 [2024-07-15 09:17:54.517140] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.410 [2024-07-15 09:17:54.517263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.410 [2024-07-15 09:17:54.517473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.410 [2024-07-15 09:17:54.517538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.410 [2024-07-15 09:17:54.517542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.668 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:43.668 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.669 [2024-07-15 09:17:54.678699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:43.669 09:17:54 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.669 [2024-07-15 09:17:54.730756] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:43.669 09:17:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:46.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.888 rmmod nvme_tcp 00:09:57.888 rmmod nvme_fabrics 00:09:57.888 rmmod nvme_keyring 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 744991 ']' 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 744991 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 744991 ']' 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 744991 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 744991 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 744991' 00:09:57.888 killing process with pid 744991 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 744991 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 744991 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.888 09:18:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.793 09:18:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:59.793 00:09:59.793 real 0m18.919s 00:09:59.793 user 0m56.470s 00:09:59.793 sys 0m3.363s 00:09:59.793 09:18:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.793 09:18:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:59.793 ************************************ 00:09:59.793 END TEST nvmf_connect_disconnect 00:09:59.793 ************************************ 00:09:59.793 09:18:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:59.793 09:18:10 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:59.793 09:18:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:59.793 09:18:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.793 09:18:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:59.793 ************************************ 00:09:59.793 START TEST nvmf_multitarget 00:09:59.793 ************************************ 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:59.793 * Looking for test storage... 
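
For reference, the nvmf_connect_disconnect run that just ended stood its target up with three RPCs (bdev_malloc_create 64 512, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_listener on 10.0.0.2:4420) and then looped num_iterations=5 times; each "disconnected 1 controller(s)" line above is one iteration. The loop body itself is not traced in this log, so the nvme-cli invocation below is a sketch with assumed flags, not the script's exact command line:

  # Sketch of one iteration (assumed flags; only the disconnect summary
  # lines appear in this log).
  for i in $(seq 1 5); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # -> "disconnected 1 controller(s)"
  done
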
00:09:59.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
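
A side note on the very long PATH dumps above: every test re-sources /etc/opt/spdk-pkgdep/paths/export.sh, and each pass prepends the go/protoc/golangci directories again, so the prefix grows by one copy per test. It is harmless, but if the duplication ever needed trimming, a first-seen-order dedup would be enough (illustration only, not part of the suite):

  # Illustration only: collapse duplicate PATH entries, keeping first-seen order.
  dedup_path() {
    local p seen=":" out=""
    local -a parts
    IFS=':' read -ra parts <<< "$PATH"
    for p in "${parts[@]}"; do
      [[ $seen == *":$p:"* ]] && continue   # already kept this entry
      seen+="$p:"
      out+="${out:+:}$p"
    done
    PATH=$out
  }
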
00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:59.793 09:18:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:02.330 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.330 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:10:02.330 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:02.330 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:02.330 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:02.330 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:02.330 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:02.330 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:02.331 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:02.331 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:02.331 Found net devices under 0000:09:00.0: cvl_0_0 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
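
The device-discovery loop being traced here is worth spelling out: gather_supported_nvmf_pci_devs first buckets PCI functions by vendor:device pair (0x8086:0x159b is the E810/ice part this host has twice), then maps each function to its kernel interface by globbing sysfs. Reduced to its core, using this host's PCI addresses:

  # Core of the sysfs lookup traced above.
  for pci in 0000:09:00.0 0000:09:00.1; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $dev ]] || continue                        # glob may not match
      echo "Found net devices under $pci: ${dev##*/}"  # -> cvl_0_0 / cvl_0_1
    done
  done
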
00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:02.331 Found net devices under 0000:09:00.1: cvl_0_1 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.331 09:18:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:02.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:02.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:10:02.331 00:10:02.331 --- 10.0.0.2 ping statistics --- 00:10:02.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.331 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:02.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:10:02.331 00:10:02.331 --- 10.0.0.1 ping statistics --- 00:10:02.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.331 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=749376 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 749376 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 749376 ']' 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:02.331 [2024-07-15 09:18:13.191406] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
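
Because common.sh@270 above folded NVMF_TARGET_NS_CMD into NVMF_APP, the nvmf_tgt starting here runs entirely inside the target namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), and waitforlisten then blocks until /var/tmp/spdk.sock answers. A simplified version of that readiness poll follows; the real helper in autotest_common.sh also checks the pid and enforces a timeout:

  # Simplified readiness poll; assumes scripts/rpc.py from the checked-out tree.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 100); do
    if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
      break                                   # target is up and serving RPCs
    fi
    sleep 0.5
  done
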
00:10:02.331 [2024-07-15 09:18:13.191498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.331 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.331 [2024-07-15 09:18:13.253737] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.331 [2024-07-15 09:18:13.355600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.331 [2024-07-15 09:18:13.355642] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.331 [2024-07-15 09:18:13.355670] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.331 [2024-07-15 09:18:13.355681] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.331 [2024-07-15 09:18:13.355690] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.331 [2024-07-15 09:18:13.355771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.331 [2024-07-15 09:18:13.355835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.331 [2024-07-15 09:18:13.355901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.331 [2024-07-15 09:18:13.355904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:02.331 09:18:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:02.588 09:18:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:02.588 09:18:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:02.588 "nvmf_tgt_1" 00:10:02.588 09:18:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:02.845 "nvmf_tgt_2" 00:10:02.845 09:18:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:02.845 09:18:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:02.845 09:18:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:10:02.845 09:18:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:03.103 true 00:10:03.103 09:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:03.103 true 00:10:03.103 09:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:03.103 09:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:03.362 rmmod nvme_tcp 00:10:03.362 rmmod nvme_fabrics 00:10:03.362 rmmod nvme_keyring 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 749376 ']' 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 749376 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 749376 ']' 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 749376 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 749376 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 749376' 00:10:03.362 killing process with pid 749376 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 749376 00:10:03.362 09:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 749376 00:10:03.621 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:03.621 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:03.621 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:03.621 09:18:14 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:03.621 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:03.622 09:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:03.622 09:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:03.622 09:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:05.522 09:18:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:05.522
00:10:05.522 real 0m5.846s
00:10:05.522 user 0m6.602s
00:10:05.522 sys 0m1.966s
00:10:05.522 09:18:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:05.522 09:18:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:10:05.522 ************************************
00:10:05.522 END TEST nvmf_multitarget
00:10:05.522 ************************************
00:10:05.780 09:18:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:10:05.780 09:18:16 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:10:05.780 09:18:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:10:05.780 09:18:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:05.780 09:18:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:05.780 ************************************
00:10:05.780 START TEST nvmf_rpc
00:10:05.780 ************************************
00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:10:05.780 * Looking for test storage...
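For reference, the nvmf_multitarget run that just ended boils down to the RPC sequence below: count the targets, add two more, recount, delete them, and recount again. A minimal sketch, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock and the multitarget_rpc.py helper from the SPDK tree (the -s 32 value is copied from the run above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target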
00:10:05.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.780 09:18:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:10:05.781 09:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
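The PCI scan that follows (gather_supported_nvmf_pci_devs) matches the Intel E810 IDs declared in the trace (0x8086:0x159b, among others) against /sys/bus/pci/devices, then resolves each hit to its kernel net interface through the device's net/ subdirectory. A standalone sketch of that vendor/device-to-interface mapping, assuming the same E810 IDs as this run:

    # enumerate E810 ports (vendor 0x8086, device 0x159b) and their net interfaces
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for dev in "$pci"/net/*; do
            [[ -e $dev ]] && echo "Found net devices under ${pci##*/}: ${dev##*/}"
        done
    done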
00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:08.309 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.309 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:08.310 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:08.310 Found net devices under 0000:09:00.0: cvl_0_0 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:08.310 Found net devices under 0000:09:00.1: cvl_0_1 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:08.310 09:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:08.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:08.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms
00:10:08.310
00:10:08.310 --- 10.0.0.2 ping statistics ---
00:10:08.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:08.310 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:08.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:08.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms
00:10:08.310
00:10:08.310 --- 10.0.0.1 ping statistics ---
00:10:08.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:08.310 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=751475
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 751475
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 751475 ']'
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:08.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:08.310 [2024-07-15 09:18:19.103461] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:10:08.310 [2024-07-15 09:18:19.103534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:08.310 EAL: No free 2048 kB hugepages reported on node 1
00:10:08.310 [2024-07-15 09:18:19.165606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:08.310 [2024-07-15 09:18:19.274506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:08.310 [2024-07-15 09:18:19.274573] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:08.310 [2024-07-15 09:18:19.274587] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.310 [2024-07-15 09:18:19.274597] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.310 [2024-07-15 09:18:19.274607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.310 [2024-07-15 09:18:19.274693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.310 [2024-07-15 09:18:19.274757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.310 [2024-07-15 09:18:19.274828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.310 [2024-07-15 09:18:19.274831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.310 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:08.310 "tick_rate": 2700000000, 00:10:08.310 "poll_groups": [ 00:10:08.310 { 00:10:08.310 "name": "nvmf_tgt_poll_group_000", 00:10:08.310 "admin_qpairs": 0, 00:10:08.310 "io_qpairs": 0, 00:10:08.310 "current_admin_qpairs": 0, 00:10:08.310 "current_io_qpairs": 0, 00:10:08.310 "pending_bdev_io": 0, 00:10:08.310 "completed_nvme_io": 0, 00:10:08.310 "transports": [] 00:10:08.310 }, 00:10:08.310 { 00:10:08.310 "name": "nvmf_tgt_poll_group_001", 00:10:08.310 "admin_qpairs": 0, 00:10:08.310 "io_qpairs": 0, 00:10:08.310 "current_admin_qpairs": 0, 00:10:08.310 "current_io_qpairs": 0, 00:10:08.311 "pending_bdev_io": 0, 00:10:08.311 "completed_nvme_io": 0, 00:10:08.311 "transports": [] 00:10:08.311 }, 00:10:08.311 { 00:10:08.311 "name": "nvmf_tgt_poll_group_002", 00:10:08.311 "admin_qpairs": 0, 00:10:08.311 "io_qpairs": 0, 00:10:08.311 "current_admin_qpairs": 0, 00:10:08.311 "current_io_qpairs": 0, 00:10:08.311 "pending_bdev_io": 0, 00:10:08.311 "completed_nvme_io": 0, 00:10:08.311 "transports": [] 00:10:08.311 }, 00:10:08.311 { 00:10:08.311 "name": "nvmf_tgt_poll_group_003", 00:10:08.311 "admin_qpairs": 0, 00:10:08.311 "io_qpairs": 0, 00:10:08.311 "current_admin_qpairs": 0, 00:10:08.311 "current_io_qpairs": 0, 00:10:08.311 "pending_bdev_io": 0, 00:10:08.311 "completed_nvme_io": 0, 00:10:08.311 "transports": [] 00:10:08.311 } 00:10:08.311 ] 00:10:08.311 }' 00:10:08.311 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:08.311 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:08.311 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:08.311 09:18:19 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:10:08.311 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:08.311 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.570 [2024-07-15 09:18:19.532953] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:08.570 "tick_rate": 2700000000, 00:10:08.570 "poll_groups": [ 00:10:08.570 { 00:10:08.570 "name": "nvmf_tgt_poll_group_000", 00:10:08.570 "admin_qpairs": 0, 00:10:08.570 "io_qpairs": 0, 00:10:08.570 "current_admin_qpairs": 0, 00:10:08.570 "current_io_qpairs": 0, 00:10:08.570 "pending_bdev_io": 0, 00:10:08.570 "completed_nvme_io": 0, 00:10:08.570 "transports": [ 00:10:08.570 { 00:10:08.570 "trtype": "TCP" 00:10:08.570 } 00:10:08.570 ] 00:10:08.570 }, 00:10:08.570 { 00:10:08.570 "name": "nvmf_tgt_poll_group_001", 00:10:08.570 "admin_qpairs": 0, 00:10:08.570 "io_qpairs": 0, 00:10:08.570 "current_admin_qpairs": 0, 00:10:08.570 "current_io_qpairs": 0, 00:10:08.570 "pending_bdev_io": 0, 00:10:08.570 "completed_nvme_io": 0, 00:10:08.570 "transports": [ 00:10:08.570 { 00:10:08.570 "trtype": "TCP" 00:10:08.570 } 00:10:08.570 ] 00:10:08.570 }, 00:10:08.570 { 00:10:08.570 "name": "nvmf_tgt_poll_group_002", 00:10:08.570 "admin_qpairs": 0, 00:10:08.570 "io_qpairs": 0, 00:10:08.570 "current_admin_qpairs": 0, 00:10:08.570 "current_io_qpairs": 0, 00:10:08.570 "pending_bdev_io": 0, 00:10:08.570 "completed_nvme_io": 0, 00:10:08.570 "transports": [ 00:10:08.570 { 00:10:08.570 "trtype": "TCP" 00:10:08.570 } 00:10:08.570 ] 00:10:08.570 }, 00:10:08.570 { 00:10:08.570 "name": "nvmf_tgt_poll_group_003", 00:10:08.570 "admin_qpairs": 0, 00:10:08.570 "io_qpairs": 0, 00:10:08.570 "current_admin_qpairs": 0, 00:10:08.570 "current_io_qpairs": 0, 00:10:08.570 "pending_bdev_io": 0, 00:10:08.570 "completed_nvme_io": 0, 00:10:08.570 "transports": [ 00:10:08.570 { 00:10:08.570 "trtype": "TCP" 00:10:08.570 } 00:10:08.570 ] 00:10:08.570 } 00:10:08.570 ] 00:10:08.570 }' 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
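Above, rpc.sh creates the TCP transport (-u 8192 is the IO unit size used by this harness) and immediately re-reads nvmf_get_stats: each of the four poll groups now lists a TCP transport entry, and the summed admin/io qpair counts must still be zero because nothing has connected yet. Condensed into a sketch, assuming rpc_cmd is the harness wrapper around scripts/rpc.py for the app started with -i 0:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    stats=$(rpc_cmd nvmf_get_stats)
    jq -r '.poll_groups[].transports[].trtype' <<< "$stats"                           # prints TCP once per poll group
    jq '[.poll_groups[].admin_qpairs, .poll_groups[].io_qpairs] | add' <<< "$stats"   # 0: no connections yet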
00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.570 Malloc1 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.570 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.571 [2024-07-15 09:18:19.684152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:10:08.571 [2024-07-15 09:18:19.706678] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:10:08.571 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:08.571 could not add new controller: failed to write to nvme-fabrics device 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.571 09:18:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:09.507 09:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:09.507 09:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:09.508 09:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:09.508 09:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:09.508 09:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:11.416 09:18:22 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.416 [2024-07-15 09:18:22.465070] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:10:11.416 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:11.416 could not add new controller: failed to write to nvme-fabrics device 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.416 09:18:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.983 09:18:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.983 09:18:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:11.983 09:18:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.983 09:18:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:11.983 09:18:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:13.912 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:13.912 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:13.912 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:14.169 09:18:25 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.169 [2024-07-15 09:18:25.240461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.169 09:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.738 09:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:14.738 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:14.738 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.738 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:14.738 09:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:17.278 09:18:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:17.278 09:18:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:17.278 09:18:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:17.278 09:18:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:17.278 09:18:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:17.278 09:18:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:17.278 09:18:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:17.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.278 09:18:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:17.278 09:18:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:17.278 09:18:27 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:17.278 09:18:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.278 [2024-07-15 09:18:28.045456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.278 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.279 09:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:17.279 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.279 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.279 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.279 09:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:17.279 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.279 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.279 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.279 09:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:17.546 09:18:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:17.546 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:10:17.546 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:17.546 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:17.546 09:18:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:19.574 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:19.574 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:19.574 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.574 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:19.575 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.575 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:19.575 09:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:19.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.575 09:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:19.575 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:19.575 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:19.575 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.834 [2024-07-15 09:18:30.812570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.834 09:18:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:20.402 09:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:20.402 09:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:20.402 09:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:20.402 09:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:20.402 09:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:22.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.936 [2024-07-15 09:18:33.652440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:22.936 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.937 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.937 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.937 09:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:22.937 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.937 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.937 09:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.937 09:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.197 09:18:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.197 09:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:23.197 09:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.197 09:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:23.197 09:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:25.728 
09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:25.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.728 [2024-07-15 09:18:36.414874] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.728 09:18:36 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.728 09:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:25.988 09:18:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:25.988 09:18:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:25.988 09:18:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:25.988 09:18:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:25.988 09:18:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:28.523 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:28.523 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:28.523 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.523 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:28.523 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.523 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 [2024-07-15 09:18:39.258223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 [2024-07-15 09:18:39.306317] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 [2024-07-15 09:18:39.354468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
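The five passes of the loop at target/rpc.sh lines 99-107 being traced here stand a subsystem up and tear it straight back down with no host attached, exercising only the target-side create/teardown paths. A minimal standalone sketch of that cycle, assuming SPDK's scripts/rpc.py on the default /var/tmp/spdk.sock; the NQN, serial number, listener address, and Malloc1 bdev name are all taken verbatim from the xtrace above:

    for i in $(seq 1 5); do
        # bring the subsystem up: create it, listen on TCP 10.0.0.2:4420,
        # attach the Malloc1 bdev as a namespace, and open it to any host
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        # tear it back down: detach namespace 1 first, then delete the subsystem
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done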
00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 [2024-07-15 09:18:39.402630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
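Once the loop finishes, the test pulls nvmf_get_stats (the JSON dump just below) and asserts that queue pairs were actually created. The jsum helper traced at target/rpc.sh lines 19-20 simply sums one jq filter across the four poll groups with awk; a sketch of the same aggregation, with the helper name and both filters lifted from the trace (reading the stats into a variable this way is an assumption about the harness, not its verbatim source):

    jsum() {
        local filter=$1
        # one number per poll group from jq, summed by awk
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    stats=$(scripts/rpc.py nvmf_get_stats)
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # trace: 2+2+1+2 = 7
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # trace: 84*4 = 336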
00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 [2024-07-15 09:18:39.450793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.524 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:28.525 "tick_rate": 2700000000, 00:10:28.525 "poll_groups": [ 00:10:28.525 { 00:10:28.525 "name": "nvmf_tgt_poll_group_000", 00:10:28.525 "admin_qpairs": 2, 00:10:28.525 "io_qpairs": 84, 00:10:28.525 "current_admin_qpairs": 0, 00:10:28.525 "current_io_qpairs": 0, 00:10:28.525 "pending_bdev_io": 0, 00:10:28.525 "completed_nvme_io": 184, 00:10:28.525 "transports": [ 00:10:28.525 { 00:10:28.525 "trtype": "TCP" 00:10:28.525 } 00:10:28.525 ] 00:10:28.525 }, 00:10:28.525 { 00:10:28.525 "name": "nvmf_tgt_poll_group_001", 00:10:28.525 "admin_qpairs": 2, 00:10:28.525 "io_qpairs": 84, 00:10:28.525 "current_admin_qpairs": 0, 00:10:28.525 "current_io_qpairs": 0, 00:10:28.525 "pending_bdev_io": 0, 00:10:28.525 "completed_nvme_io": 136, 00:10:28.525 "transports": [ 00:10:28.525 { 00:10:28.525 "trtype": "TCP" 00:10:28.525 } 00:10:28.525 ] 00:10:28.525 }, 00:10:28.525 { 00:10:28.525 
"name": "nvmf_tgt_poll_group_002", 00:10:28.525 "admin_qpairs": 1, 00:10:28.525 "io_qpairs": 84, 00:10:28.525 "current_admin_qpairs": 0, 00:10:28.525 "current_io_qpairs": 0, 00:10:28.525 "pending_bdev_io": 0, 00:10:28.525 "completed_nvme_io": 133, 00:10:28.525 "transports": [ 00:10:28.525 { 00:10:28.525 "trtype": "TCP" 00:10:28.525 } 00:10:28.525 ] 00:10:28.525 }, 00:10:28.525 { 00:10:28.525 "name": "nvmf_tgt_poll_group_003", 00:10:28.525 "admin_qpairs": 2, 00:10:28.525 "io_qpairs": 84, 00:10:28.525 "current_admin_qpairs": 0, 00:10:28.525 "current_io_qpairs": 0, 00:10:28.525 "pending_bdev_io": 0, 00:10:28.525 "completed_nvme_io": 233, 00:10:28.525 "transports": [ 00:10:28.525 { 00:10:28.525 "trtype": "TCP" 00:10:28.525 } 00:10:28.525 ] 00:10:28.525 } 00:10:28.525 ] 00:10:28.525 }' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:28.525 rmmod nvme_tcp 00:10:28.525 rmmod nvme_fabrics 00:10:28.525 rmmod nvme_keyring 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 751475 ']' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 751475 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 751475 ']' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 751475 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 751475 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 751475' 00:10:28.525 killing process with pid 751475 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 751475 00:10:28.525 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 751475 00:10:28.783 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:28.783 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:28.783 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:28.783 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:28.783 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:28.783 09:18:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.783 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.783 09:18:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.320 09:18:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:31.320 00:10:31.320 real 0m25.243s 00:10:31.320 user 1m21.776s 00:10:31.320 sys 0m4.211s 00:10:31.320 09:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.320 09:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.320 ************************************ 00:10:31.320 END TEST nvmf_rpc 00:10:31.320 ************************************ 00:10:31.320 09:18:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:31.320 09:18:42 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:31.320 09:18:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:31.320 09:18:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.320 09:18:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.320 ************************************ 00:10:31.320 START TEST nvmf_invalid 00:10:31.320 ************************************ 00:10:31.320 09:18:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:31.320 * Looking for test storage... 
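The nvmf_invalid run starting here drives the same rpc.py entry points with deliberately malformed arguments, a nonexistent target name, then a serial number and a model number carrying a \x1f control character, and matches on the JSON-RPC error text, as its traces further below show. Reduced to the first case (the NQN, the foobar target name, and the expected message are taken from the trace below; capturing stderr this way is an assumption about the harness):

    # a bad target name must fail with a -32603 "Unable to find target" error
    out=$(scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18577 2>&1) || true
    [[ $out == *"Unable to find target"* ]]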
00:10:31.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.320 09:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.320 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:31.320 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.320 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.320 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.320 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.320 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.320 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:31.321 09:18:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:33.224 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:33.224 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:33.224 Found net devices under 0000:09:00.0: cvl_0_0 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:33.224 Found net devices under 0000:09:00.1: cvl_0_1 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.224 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:33.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:33.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:10:33.225 00:10:33.225 --- 10.0.0.2 ping statistics --- 00:10:33.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.225 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:33.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:33.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:10:33.225 00:10:33.225 --- 10.0.0.1 ping statistics --- 00:10:33.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.225 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=755980 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 755980 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 755980 ']' 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:33.225 09:18:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:33.225 [2024-07-15 09:18:44.402218] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
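At this point nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace as pid 755980, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A reconstruction of that wait from the values visible in the trace (rpc_addr, max_retries=100, the waiting message); polling rpc_get_methods and the kill -0 liveness check are assumptions about the helper's internals, not its verbatim source:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
            # the RPC server answering rpc_get_methods means it is ready
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }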
00:10:33.225 [2024-07-15 09:18:44.402324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.484 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.484 [2024-07-15 09:18:44.466033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.484 [2024-07-15 09:18:44.566358] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.484 [2024-07-15 09:18:44.566413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.484 [2024-07-15 09:18:44.566436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.484 [2024-07-15 09:18:44.566446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.484 [2024-07-15 09:18:44.566456] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.484 [2024-07-15 09:18:44.566592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.484 [2024-07-15 09:18:44.566698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.484 [2024-07-15 09:18:44.566793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.484 [2024-07-15 09:18:44.566796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.742 09:18:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.742 09:18:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:33.742 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:33.742 09:18:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:33.742 09:18:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:33.742 09:18:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.742 09:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:33.742 09:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18577 00:10:33.999 [2024-07-15 09:18:44.958452] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:33.999 09:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:33.999 { 00:10:33.999 "nqn": "nqn.2016-06.io.spdk:cnode18577", 00:10:33.999 "tgt_name": "foobar", 00:10:33.999 "method": "nvmf_create_subsystem", 00:10:33.999 "req_id": 1 00:10:33.999 } 00:10:33.999 Got JSON-RPC error response 00:10:33.999 response: 00:10:33.999 { 00:10:33.999 "code": -32603, 00:10:33.999 "message": "Unable to find target foobar" 00:10:33.999 }' 00:10:33.999 09:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:33.999 { 00:10:33.999 "nqn": "nqn.2016-06.io.spdk:cnode18577", 00:10:33.999 "tgt_name": "foobar", 00:10:33.999 "method": "nvmf_create_subsystem", 00:10:33.999 "req_id": 1 00:10:33.999 } 00:10:33.999 Got JSON-RPC error response 00:10:33.999 response: 00:10:33.999 { 00:10:33.999 "code": -32603, 00:10:33.999 "message": "Unable to find target foobar" 
00:10:33.999 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:33.999 09:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:33.999 09:18:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24431 00:10:34.257 [2024-07-15 09:18:45.211338] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24431: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:34.257 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:34.257 { 00:10:34.257 "nqn": "nqn.2016-06.io.spdk:cnode24431", 00:10:34.257 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:34.257 "method": "nvmf_create_subsystem", 00:10:34.257 "req_id": 1 00:10:34.257 } 00:10:34.257 Got JSON-RPC error response 00:10:34.257 response: 00:10:34.257 { 00:10:34.257 "code": -32602, 00:10:34.257 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:34.257 }' 00:10:34.257 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:34.257 { 00:10:34.257 "nqn": "nqn.2016-06.io.spdk:cnode24431", 00:10:34.257 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:34.257 "method": "nvmf_create_subsystem", 00:10:34.257 "req_id": 1 00:10:34.257 } 00:10:34.257 Got JSON-RPC error response 00:10:34.257 response: 00:10:34.257 { 00:10:34.257 "code": -32602, 00:10:34.257 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:34.257 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:34.257 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:34.257 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11913 00:10:34.516 [2024-07-15 09:18:45.456101] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11913: invalid model number 'SPDK_Controller' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:34.516 { 00:10:34.516 "nqn": "nqn.2016-06.io.spdk:cnode11913", 00:10:34.516 "model_number": "SPDK_Controller\u001f", 00:10:34.516 "method": "nvmf_create_subsystem", 00:10:34.516 "req_id": 1 00:10:34.516 } 00:10:34.516 Got JSON-RPC error response 00:10:34.516 response: 00:10:34.516 { 00:10:34.516 "code": -32602, 00:10:34.516 "message": "Invalid MN SPDK_Controller\u001f" 00:10:34.516 }' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:34.516 { 00:10:34.516 "nqn": "nqn.2016-06.io.spdk:cnode11913", 00:10:34.516 "model_number": "SPDK_Controller\u001f", 00:10:34.516 "method": "nvmf_create_subsystem", 00:10:34.516 "req_id": 1 00:10:34.516 } 00:10:34.516 Got JSON-RPC error response 00:10:34.516 response: 00:10:34.516 { 00:10:34.516 "code": -32602, 00:10:34.516 "message": "Invalid MN SPDK_Controller\u001f" 00:10:34.516 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 
09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.516 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.516 
09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 3 == \- ]] 00:10:34.517 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '34EUo%5?aN[:G{kvEc1-[ir7v~tMX~0' 00:10:35.032 09:18:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'zb&"0=W:a?EQChTm'\''&\?yN4KS.->1-[ir7v~tMX~0' nqn.2016-06.io.spdk:cnode13386 00:10:35.291 [2024-07-15 09:18:46.230625] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13386: invalid model number 'zb&"0=W:a?EQChTm'&\?yN4KS.->1-[ir7v~tMX~0' 00:10:35.291 09:18:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:35.291 { 00:10:35.291 "nqn": "nqn.2016-06.io.spdk:cnode13386", 00:10:35.291 "model_number": "zb&\"0=W:a?EQChTm'\''&\\?yN4KS.->1-[ir7v~tMX~0", 00:10:35.291 "method": "nvmf_create_subsystem", 00:10:35.291 "req_id": 1 00:10:35.291 } 
00:10:35.291 Got JSON-RPC error response 00:10:35.291 response: 00:10:35.291 { 00:10:35.291 "code": -32602, 00:10:35.291 "message": "Invalid MN zb&\"0=W:a?EQChTm'\''&\\?yN4KS.->1-[ir7v~tMX~0" 00:10:35.291 }' 00:10:35.291 09:18:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:35.291 { 00:10:35.291 "nqn": "nqn.2016-06.io.spdk:cnode13386", 00:10:35.291 "model_number": "zb&\"0=W:a?EQChTm'&\\?yN4KS.->1-[ir7v~tMX~0", 00:10:35.291 "method": "nvmf_create_subsystem", 00:10:35.291 "req_id": 1 00:10:35.291 } 00:10:35.291 Got JSON-RPC error response 00:10:35.291 response: 00:10:35.291 { 00:10:35.291 "code": -32602, 00:10:35.291 "message": "Invalid MN zb&\"0=W:a?EQChTm'&\\?yN4KS.->1-[ir7v~tMX~0" 00:10:35.291 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:35.291 09:18:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:35.291 [2024-07-15 09:18:46.471543] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.549 09:18:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:35.549 09:18:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:35.549 09:18:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:35.549 09:18:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:35.549 09:18:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:35.549 09:18:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:35.808 [2024-07-15 09:18:46.961165] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:35.808 09:18:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:35.808 { 00:10:35.808 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:35.808 "listen_address": { 00:10:35.808 "trtype": "tcp", 00:10:35.808 "traddr": "", 00:10:35.808 "trsvcid": "4421" 00:10:35.808 }, 00:10:35.808 "method": "nvmf_subsystem_remove_listener", 00:10:35.808 "req_id": 1 00:10:35.808 } 00:10:35.808 Got JSON-RPC error response 00:10:35.808 response: 00:10:35.808 { 00:10:35.808 "code": -32602, 00:10:35.808 "message": "Invalid parameters" 00:10:35.808 }' 00:10:35.808 09:18:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:35.808 { 00:10:35.808 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:35.808 "listen_address": { 00:10:35.808 "trtype": "tcp", 00:10:35.808 "traddr": "", 00:10:35.808 "trsvcid": "4421" 00:10:35.808 }, 00:10:35.808 "method": "nvmf_subsystem_remove_listener", 00:10:35.808 "req_id": 1 00:10:35.808 } 00:10:35.808 Got JSON-RPC error response 00:10:35.808 response: 00:10:35.808 { 00:10:35.808 "code": -32602, 00:10:35.808 "message": "Invalid parameters" 00:10:35.808 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:35.808 09:18:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20984 -i 0 00:10:36.066 [2024-07-15 09:18:47.217970] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20984: invalid cntlid range [0-65519] 00:10:36.066 09:18:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:36.066 { 00:10:36.066 
"nqn": "nqn.2016-06.io.spdk:cnode20984", 00:10:36.066 "min_cntlid": 0, 00:10:36.066 "method": "nvmf_create_subsystem", 00:10:36.066 "req_id": 1 00:10:36.066 } 00:10:36.066 Got JSON-RPC error response 00:10:36.066 response: 00:10:36.066 { 00:10:36.066 "code": -32602, 00:10:36.066 "message": "Invalid cntlid range [0-65519]" 00:10:36.066 }' 00:10:36.066 09:18:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:36.066 { 00:10:36.066 "nqn": "nqn.2016-06.io.spdk:cnode20984", 00:10:36.066 "min_cntlid": 0, 00:10:36.066 "method": "nvmf_create_subsystem", 00:10:36.066 "req_id": 1 00:10:36.066 } 00:10:36.066 Got JSON-RPC error response 00:10:36.066 response: 00:10:36.066 { 00:10:36.066 "code": -32602, 00:10:36.066 "message": "Invalid cntlid range [0-65519]" 00:10:36.067 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:36.067 09:18:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6273 -i 65520 00:10:36.325 [2024-07-15 09:18:47.462827] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6273: invalid cntlid range [65520-65519] 00:10:36.325 09:18:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:36.325 { 00:10:36.325 "nqn": "nqn.2016-06.io.spdk:cnode6273", 00:10:36.325 "min_cntlid": 65520, 00:10:36.325 "method": "nvmf_create_subsystem", 00:10:36.325 "req_id": 1 00:10:36.325 } 00:10:36.325 Got JSON-RPC error response 00:10:36.325 response: 00:10:36.325 { 00:10:36.325 "code": -32602, 00:10:36.325 "message": "Invalid cntlid range [65520-65519]" 00:10:36.325 }' 00:10:36.325 09:18:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:36.325 { 00:10:36.325 "nqn": "nqn.2016-06.io.spdk:cnode6273", 00:10:36.325 "min_cntlid": 65520, 00:10:36.325 "method": "nvmf_create_subsystem", 00:10:36.325 "req_id": 1 00:10:36.325 } 00:10:36.325 Got JSON-RPC error response 00:10:36.325 response: 00:10:36.325 { 00:10:36.325 "code": -32602, 00:10:36.325 "message": "Invalid cntlid range [65520-65519]" 00:10:36.325 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:36.325 09:18:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12273 -I 0 00:10:36.582 [2024-07-15 09:18:47.711659] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12273: invalid cntlid range [1-0] 00:10:36.582 09:18:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:36.582 { 00:10:36.582 "nqn": "nqn.2016-06.io.spdk:cnode12273", 00:10:36.582 "max_cntlid": 0, 00:10:36.582 "method": "nvmf_create_subsystem", 00:10:36.582 "req_id": 1 00:10:36.582 } 00:10:36.582 Got JSON-RPC error response 00:10:36.582 response: 00:10:36.582 { 00:10:36.582 "code": -32602, 00:10:36.582 "message": "Invalid cntlid range [1-0]" 00:10:36.582 }' 00:10:36.582 09:18:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:36.582 { 00:10:36.582 "nqn": "nqn.2016-06.io.spdk:cnode12273", 00:10:36.582 "max_cntlid": 0, 00:10:36.582 "method": "nvmf_create_subsystem", 00:10:36.582 "req_id": 1 00:10:36.582 } 00:10:36.582 Got JSON-RPC error response 00:10:36.582 response: 00:10:36.582 { 00:10:36.582 "code": -32602, 00:10:36.582 "message": "Invalid cntlid range [1-0]" 00:10:36.582 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:36.582 09:18:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3516 -I 65520 00:10:36.840 [2024-07-15 09:18:47.956419] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3516: invalid cntlid range [1-65520] 00:10:36.840 09:18:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:36.840 { 00:10:36.840 "nqn": "nqn.2016-06.io.spdk:cnode3516", 00:10:36.840 "max_cntlid": 65520, 00:10:36.840 "method": "nvmf_create_subsystem", 00:10:36.840 "req_id": 1 00:10:36.840 } 00:10:36.840 Got JSON-RPC error response 00:10:36.840 response: 00:10:36.840 { 00:10:36.840 "code": -32602, 00:10:36.840 "message": "Invalid cntlid range [1-65520]" 00:10:36.840 }' 00:10:36.840 09:18:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:36.840 { 00:10:36.840 "nqn": "nqn.2016-06.io.spdk:cnode3516", 00:10:36.840 "max_cntlid": 65520, 00:10:36.840 "method": "nvmf_create_subsystem", 00:10:36.840 "req_id": 1 00:10:36.840 } 00:10:36.840 Got JSON-RPC error response 00:10:36.840 response: 00:10:36.840 { 00:10:36.840 "code": -32602, 00:10:36.840 "message": "Invalid cntlid range [1-65520]" 00:10:36.840 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:36.840 09:18:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6940 -i 6 -I 5 00:10:37.098 [2024-07-15 09:18:48.213281] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6940: invalid cntlid range [6-5] 00:10:37.098 09:18:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:37.098 { 00:10:37.098 "nqn": "nqn.2016-06.io.spdk:cnode6940", 00:10:37.098 "min_cntlid": 6, 00:10:37.098 "max_cntlid": 5, 00:10:37.098 "method": "nvmf_create_subsystem", 00:10:37.098 "req_id": 1 00:10:37.098 } 00:10:37.098 Got JSON-RPC error response 00:10:37.098 response: 00:10:37.098 { 00:10:37.098 "code": -32602, 00:10:37.098 "message": "Invalid cntlid range [6-5]" 00:10:37.098 }' 00:10:37.098 09:18:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:37.098 { 00:10:37.098 "nqn": "nqn.2016-06.io.spdk:cnode6940", 00:10:37.098 "min_cntlid": 6, 00:10:37.098 "max_cntlid": 5, 00:10:37.098 "method": "nvmf_create_subsystem", 00:10:37.098 "req_id": 1 00:10:37.098 } 00:10:37.098 Got JSON-RPC error response 00:10:37.098 response: 00:10:37.098 { 00:10:37.098 "code": -32602, 00:10:37.098 "message": "Invalid cntlid range [6-5]" 00:10:37.098 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:37.098 09:18:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:37.355 09:18:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:37.355 { 00:10:37.355 "name": "foobar", 00:10:37.355 "method": "nvmf_delete_target", 00:10:37.355 "req_id": 1 00:10:37.355 } 00:10:37.355 Got JSON-RPC error response 00:10:37.355 response: 00:10:37.355 { 00:10:37.355 "code": -32602, 00:10:37.355 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:10:37.355 }' 00:10:37.355 09:18:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:37.355 { 00:10:37.355 "name": "foobar", 00:10:37.355 "method": "nvmf_delete_target", 00:10:37.355 "req_id": 1 00:10:37.355 } 00:10:37.355 Got JSON-RPC error response 00:10:37.355 response: 00:10:37.355 { 00:10:37.356 "code": -32602, 00:10:37.356 "message": "The specified target doesn't exist, cannot delete it." 00:10:37.356 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:37.356 rmmod nvme_tcp 00:10:37.356 rmmod nvme_fabrics 00:10:37.356 rmmod nvme_keyring 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 755980 ']' 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 755980 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 755980 ']' 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 755980 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 755980 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 755980' 00:10:37.356 killing process with pid 755980 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 755980 00:10:37.356 09:18:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 755980 00:10:37.614 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:37.614 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:37.614 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:37.614 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:37.614 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:37.614 09:18:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.614 09:18:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.614 
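
A note on the pattern just exercised: every nvmf_invalid case above is the same three-step check — feed nvmf_create_subsystem a bad argument (an unknown target name, a serial or model string carrying the \x1f control byte, an impossible cntlid range), capture rpc.py's JSON-RPC error, and glob-match the message text. A condensed sketch of that check, plus the gen_random_s helper whose per-character trace dominates the log above (assumptions: a running nvmf_tgt on the default RPC socket, rpc.py being the workspace scripts/rpc.py, and arbitrary cnode numbers):

    # Sketch of gen_random_s: draw codes from printable ASCII 32..127 (the chars
    # table traced above) and append one character per iteration, as the loop does.
    gen_random_s() {
        local length=$1 string= code chr ll
        for ((ll = 0; ll < length; ll++)); do
            code=$((RANDOM % 96 + 32))
            printf -v chr "\\x$(printf '%x' "$code")"
            string+=$chr
        done
        printf '%s\n' "$string"
    }

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Unknown target name: expect code -32603, "Unable to find target foobar".
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18577 2>&1) || true
    [[ $out == *"Unable to find target foobar"* ]]
    # min_cntlid 6 > max_cntlid 5: expect code -32602, "Invalid cntlid range [6-5]".
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6940 -i 6 -I 5 2>&1) || true
    [[ $out == *"Invalid cntlid range [6-5]"* ]]
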
09:18:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.518 09:18:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:39.518 00:10:39.518 real 0m8.630s 00:10:39.518 user 0m19.993s 00:10:39.518 sys 0m2.346s 00:10:39.518 09:18:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.518 09:18:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:39.518 ************************************ 00:10:39.518 END TEST nvmf_invalid 00:10:39.518 ************************************ 00:10:39.776 09:18:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:39.776 09:18:50 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:39.776 09:18:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:39.776 09:18:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.776 09:18:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:39.776 ************************************ 00:10:39.776 START TEST nvmf_abort 00:10:39.776 ************************************ 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:39.776 * Looking for test storage... 00:10:39.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:39.776 
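
One detail worth pulling out of the common.sh block sourced above: the initiator identity is minted fresh per run with nvme gen-hostnqn, and its uuid suffix doubles as the host ID. This abort test never touches the kernel initiator (IO comes from SPDK's own example binary), so the following is only a sketch of how those variables are meant to be consumed, with a hypothetical connect target:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # the bare uuid, matching the trace above
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # A kernel-initiator test would later pass both flags through verbatim:
    # nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn
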
09:18:50 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:39.776 09:18:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:41.682 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:41.682 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:41.682 Found net devices under 0000:09:00.0: cvl_0_0 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:41.682 Found net devices under 0000:09:00.1: cvl_0_1 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:41.682 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.683 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:41.941 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:10:41.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:10:41.941 00:10:41.941 --- 10.0.0.2 ping statistics --- 00:10:41.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.941 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:10:41.941 00:10:41.941 --- 10.0.0.1 ping statistics --- 00:10:41.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.941 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=758607 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 758607 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 758607 ']' 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.941 09:18:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.941 [2024-07-15 09:18:53.023163] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
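
Condensed for readability, the nvmf_tcp_init sequence traced above pins the two e810 ports on either side of a network namespace, so initiator and target cross a real link on a single host, then launches nvmf_tgt inside that namespace (device and namespace names are this rig's, discovered from the 0x8086:0x159b PCI IDs earlier):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator port stays behind
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # nvmf_tgt then runs inside the namespace (-m 0xE = cores 1-3, as the reactor
    # lines confirm), and the harness waits on /var/tmp/spdk.sock before any RPC:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
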
00:10:41.941 [2024-07-15 09:18:53.023255] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.941 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.941 [2024-07-15 09:18:53.086477] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.219 [2024-07-15 09:18:53.187551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.219 [2024-07-15 09:18:53.187621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.219 [2024-07-15 09:18:53.187644] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.219 [2024-07-15 09:18:53.187655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.219 [2024-07-15 09:18:53.187665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.219 [2024-07-15 09:18:53.187811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.219 [2024-07-15 09:18:53.187884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.219 [2024-07-15 09:18:53.187887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:42.219 [2024-07-15 09:18:53.334514] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:42.219 Malloc0 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:42.219 Delay0 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
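
The heart of the target setup just traced is the delay bdev: a 64 MiB malloc bdev with 4096-byte blocks, wrapped so every IO lingers in the target long enough for an abort to catch it. Reassembled from the rpc_cmd calls (rpc.py stands for the workspace scripts/rpc.py; the delay bdev's four latency arguments are in microseconds, so a full second each for average and p99, read and write):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB backing bdev, 4 KiB blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
           -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s per IO, so aborts can win the race
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0

The namespace attach, the 10.0.0.2:4420 listener, and the abort example run follow in the trace below.
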
00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:42.219 [2024-07-15 09:18:53.400558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.219 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:42.485 09:18:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.485 09:18:53 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:42.485 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.485 [2024-07-15 09:18:53.505613] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:44.392 Initializing NVMe Controllers 00:10:44.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:44.392 controller IO queue size 128 less than required 00:10:44.392 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:44.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:44.392 Initialization complete. Launching workers. 
00:10:44.392 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33331 00:10:44.392 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33392, failed to submit 62 00:10:44.392 success 33335, unsuccess 57, failed 0 00:10:44.392 09:18:55 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:44.392 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.392 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:44.392 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.392 09:18:55 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:44.392 09:18:55 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:44.392 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:44.392 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:44.392 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:44.392 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:44.392 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:44.392 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:44.392 rmmod nvme_tcp 00:10:44.392 rmmod nvme_fabrics 00:10:44.652 rmmod nvme_keyring 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 758607 ']' 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 758607 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 758607 ']' 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 758607 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 758607 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 758607' 00:10:44.652 killing process with pid 758607 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 758607 00:10:44.652 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 758607 00:10:44.911 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:44.911 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:44.911 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:44.911 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:44.911 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:44.911 09:18:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.911 09:18:55 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.911 09:18:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.818 09:18:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:46.818 00:10:46.818 real 0m7.229s 00:10:46.818 user 0m10.526s 00:10:46.818 sys 0m2.297s 00:10:46.818 09:18:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.818 09:18:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:46.818 ************************************ 00:10:46.818 END TEST nvmf_abort 00:10:46.818 ************************************ 00:10:46.818 09:18:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:46.818 09:18:58 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:46.818 09:18:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:46.818 09:18:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.818 09:18:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:47.076 ************************************ 00:10:47.076 START TEST nvmf_ns_hotplug_stress 00:10:47.076 ************************************ 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:47.076 * Looking for test storage... 00:10:47.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.076 09:18:58 
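
A quick consistency check on the abort summary above, since the totals are the real pass signal: 123 IOs completed normally and 33331 came back failed (aborted), 33454 in all; on the abort side, 33392 submitted plus 62 that could not be submitted is again 33454, i.e. one abort was attempted per IO issued. Of the submitted aborts, 33335 succeeded and 57 lost the race to a normal completion (33335 + 57 = 33392), with none failing outright — roughly what a 1-second delay bdev under queue depth 128 and a 1-second run should produce.
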
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.076 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:47.077 09:18:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:47.077 09:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.610 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.610 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:49.611 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:49.611 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.611 09:19:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:49.611 Found net devices under 0000:09:00.0: cvl_0_0 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:49.611 Found net devices under 0000:09:00.1: cvl_0_1 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.611 09:19:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.611 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:49.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:10:49.612 00:10:49.612 --- 10.0.0.2 ping statistics --- 00:10:49.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.612 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:10:49.612 00:10:49.612 --- 10.0.0.1 ping statistics --- 00:10:49.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.612 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=760836 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 760836 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 760836 ']' 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.612 [2024-07-15 09:19:00.425996] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
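What just ran is the phy-mode network bring-up from nvmf/common.sh: one port of the NIC pair (cvl_0_0) is pushed into a private network namespace and becomes the target side at 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and a single ping in each direction proves the path before nvmf_tgt is launched inside that namespace. Condensed into a standalone sketch built only from the commands visible in the trace (the cvl_* names belong to this run's ice-driven E810 ports; other inventory will report different device names):

    # target port into its own namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic in on the initiator side
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns
    # the target app is then run inside the namespace, as seen above:
    # ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE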
00:10:49.612 [2024-07-15 09:19:00.426075] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.612 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.612 [2024-07-15 09:19:00.487247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:49.612 [2024-07-15 09:19:00.597314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.612 [2024-07-15 09:19:00.597369] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.612 [2024-07-15 09:19:00.597390] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.612 [2024-07-15 09:19:00.597402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.612 [2024-07-15 09:19:00.597412] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.612 [2024-07-15 09:19:00.597501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.612 [2024-07-15 09:19:00.597532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.612 [2024-07-15 09:19:00.597535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:49.612 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:49.870 [2024-07-15 09:19:00.961070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.870 09:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:50.127 09:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.385 [2024-07-15 09:19:01.463830] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.385 09:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:50.642 09:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:50.900 Malloc0 00:10:50.900 09:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:51.157 Delay0 00:10:51.157 09:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.415 09:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:51.672 NULL1 00:10:51.672 09:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:51.929 09:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=761250 00:10:51.929 09:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:51.929 09:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:10:51.929 09:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.929 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.192 09:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.450 09:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:52.450 09:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:52.707 true 00:10:52.707 09:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:10:52.707 09:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.964 09:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.221 09:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:53.221 09:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:53.479 true 00:10:53.479 09:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:10:53.479 09:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.737 09:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.994 09:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:53.995 09:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:54.253 true 00:10:54.253 09:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:10:54.253 09:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.631 Read completed with error (sct=0, sc=11) 00:10:55.631 09:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.631 09:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:55.631 09:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:55.889 true 00:10:55.889 09:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:10:55.889 09:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.147 09:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.404 09:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:56.404 09:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:56.662 true 00:10:56.662 09:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:10:56.662 09:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.599 09:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.857 09:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:57.858 09:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:58.116 true 00:10:58.116 09:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:10:58.116 09:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.373 09:19:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.630 09:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:58.630 09:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:58.888 true 00:10:58.888 09:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:10:58.888 09:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.146 09:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.403 09:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:59.403 09:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:59.720 true 00:10:59.720 09:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:10:59.720 09:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.656 09:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.912 09:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:00.912 09:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:01.170 true 00:11:01.170 09:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:01.170 09:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.428 09:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.703 09:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:01.703 09:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:01.980 true 00:11:01.980 09:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:01.980 09:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.914 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.914 09:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.173 09:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:03.173 09:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:03.432 true 00:11:03.432 09:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:03.432 09:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.690 09:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.948 09:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:03.948 09:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:04.206 true 00:11:04.206 09:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:04.206 09:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.153 09:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.410 09:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:05.410 09:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:05.668 true 00:11:05.668 09:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:05.668 09:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.927 09:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.185 09:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:06.185 09:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:06.444 true 00:11:06.444 09:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:06.444 09:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.381 09:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.381 09:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:07.381 09:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:07.639 true 00:11:07.639 09:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:07.639 09:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.897 09:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.155 09:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:08.155 09:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:08.413 true 00:11:08.413 09:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:08.413 09:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.353 09:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.610 09:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:09.610 09:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:09.868 true 00:11:09.868 09:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:09.868 09:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.126 09:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.384 09:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:10.384 09:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:10.642 true 00:11:10.642 09:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:10.642 09:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.601 09:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.601 09:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:11.601 09:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:11.859 true 00:11:11.859 09:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:11.859 09:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.116 09:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.373 09:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:12.373 09:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:12.631 true 00:11:12.631 09:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:12.631 09:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.570 09:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.828 09:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:13.828 09:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:14.087 true 00:11:14.087 09:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:14.087 09:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.345 09:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.603 09:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:14.603 09:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:14.861 true 00:11:14.861 09:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:14.861 09:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.797 09:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.055 09:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:16.055 09:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:16.312 true 00:11:16.312 09:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:16.312 09:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.570 09:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.829 09:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:16.829 09:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:17.086 true 00:11:17.086 09:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:17.086 09:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.020 09:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.587 09:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:18.587 09:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:18.587 true 00:11:18.587 09:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:18.587 09:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.846 09:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.104 09:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 
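The trace has now settled into the loop that gives the test its name. spdk_nvme_perf was started in the background against 10.0.0.2:4420 and, for as long as it stays alive, the script keeps detaching namespace 1 from cnode1, re-attaching the delay bdev, and bumping the size of the NULL1 bdev behind namespace 2 (1001, 1002, ... 1026 so far). The periodic "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the point of the exercise rather than a failure: sct=0/sc=11 likely decodes to NVMe generic status 0x0b, Invalid Namespace or Format, which is what the initiator's reads should land on while namespace 1 is momentarily gone. A sketch of the loop as reconstructed from the xtrace line numbers (rpc.py abbreviates the full scripts/rpc.py path used in the trace; this is a reconstruction, not the literal script):

    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!                                      # 761250 in this run
    null_size=1000                                   # bdev_null_create NULL1 1000 512
    while kill -0 "$PERF_PID"; do                    # stop when perf exits
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        (( ++null_size ))
        rpc.py bdev_null_resize NULL1 "$null_size"   # resize ns 2 under live I/O
    done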
00:11:19.104 09:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:19.361 true 00:11:19.361 09:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:19.361 09:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.619 09:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.876 09:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:19.876 09:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:20.134 true 00:11:20.134 09:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:20.134 09:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.509 09:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.509 09:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:21.509 09:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:21.767 true 00:11:21.767 09:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:21.767 09:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.025 09:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.283 09:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:22.283 09:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:22.539 Initializing NVMe Controllers 00:11:22.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:22.539 Controller IO queue size 128, less than required. 00:11:22.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:22.539 Controller IO queue size 128, less than required. 00:11:22.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:22.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:22.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:22.539 Initialization complete. 
Launching workers. 00:11:22.539 ======================================================== 00:11:22.539 Latency(us) 00:11:22.539 Device Information : IOPS MiB/s Average min max 00:11:22.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 738.74 0.36 78327.29 2915.45 1023545.21 00:11:22.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9494.37 4.64 13442.50 3222.76 543171.48 00:11:22.539 ======================================================== 00:11:22.539 Total : 10233.11 5.00 18126.59 2915.45 1023545.21 00:11:22.539 00:11:22.539 true 00:11:22.539 09:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 761250 00:11:22.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (761250) - No such process 00:11:22.539 09:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 761250 00:11:22.539 09:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.796 09:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:23.053 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:23.053 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:23.053 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:23.053 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:23.053 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:23.309 null0 00:11:23.309 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:23.309 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:23.309 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:23.591 null1 00:11:23.591 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:23.591 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:23.591 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:23.868 null2 00:11:23.868 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:23.868 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:23.868 09:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:24.146 null3 00:11:24.146 09:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:24.146 09:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:24.146 09:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:24.146 null4 00:11:24.146 09:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:24.146 09:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:24.146 09:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:24.419 null5 00:11:24.419 09:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:24.419 09:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:24.419 09:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:24.685 null6 00:11:24.685 09:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:24.685 09:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:24.685 09:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:24.942 null7 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
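The 30-second perf run has ended (the kill -0 probe at script line 44 now answers "No such process"), the Delay0 and NULL1 namespaces have been removed, and the second phase begins: eight null bdevs, null0 through null7, each 100 MB with a 4096-byte block size, one per concurrent worker. The creation loop as it appears in the trace:

    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        rpc.py bdev_null_create "null$i" 100 4096   # 100 MB, 4096-byte blocks
    done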
00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 765195 765196 765198 765200 765202 765204 765206 765208 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:24.943 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:25.200 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:25.200 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:25.200 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:25.200 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:25.200 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.200 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:25.200 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:25.200 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.457 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:25.715 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:25.715 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:25.715 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:25.715 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:25.715 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.974 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:25.974 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:25.974 09:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:25.974 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:25.974 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:25.974 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:26.232 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.232 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.232 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:26.232 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.233 09:19:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.233 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:26.490 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:26.490 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:26.490 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:26.490 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:26.490 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:26.490 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:26.490 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.490 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.748 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:27.006 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:27.006 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:27.006 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:27.006 09:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:27.006 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:27.006 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:27.006 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.006 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.264 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.264 
09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:27.522 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:27.522 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:27.522 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:27.522 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:27.522 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:27.522 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:27.522 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:27.522 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.780 09:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:28.039 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:28.039 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:28.039 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:28.039 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:28.039 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:28.039 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.039 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:28.039 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.297 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:28.555 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:28.555 
09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:28.555 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:28.555 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:28.555 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.555 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:28.555 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:28.555 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.814 09:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:29.072 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:29.072 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:29.072 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:29.072 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:29.072 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:29.072 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:29.072 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.072 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.331 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:29.589 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:29.589 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:29.589 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:29.589 
09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:29.589 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:29.589 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:29.589 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:29.589 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.847 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.847 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.847 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:29.847 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.847 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.847 09:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.847 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:30.104 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:30.104 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:30.104 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:30.104 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:30.104 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:30.104 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:30.104 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.362 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:30.362 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.362 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.362 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
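By this point every worker's (( i < 10 )) check has gone false and the eight add_remove loops are draining; the script never inspects the final namespace layout, it only cares that the target survived. Purely as an illustration (not something the test runs), the surviving attachments could be queried with the standard nvmf_get_subsystems RPC; the JSON field names used below are assumptions based on its usual output shape:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Hypothetical spot-check: print the nsid/bdev pairs still attached to cnode1.
"$rpc" nvmf_get_subsystems | python3 -c '
import json, sys
for sub in json.load(sys.stdin):
    if sub["nqn"] == "nqn.2016-06.io.spdk:cnode1":
        for ns in sub.get("namespaces", []):
            print(ns["nsid"], ns["bdev_name"])
'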
00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:30.622 rmmod nvme_tcp 00:11:30.622 rmmod nvme_fabrics 00:11:30.622 rmmod nvme_keyring 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 760836 ']' 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 760836 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 760836 ']' 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 760836 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 760836 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 760836' 00:11:30.622 killing process with pid 760836 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 760836 00:11:30.622 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 760836 00:11:30.882 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.882 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:30.882 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:11:30.882 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.882 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:30.882 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.882 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.882 09:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.783 09:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:32.783 00:11:32.783 real 0m45.946s 00:11:32.783 user 3m30.895s 00:11:32.783 sys 0m15.845s 00:11:32.783 09:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.783 09:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.783 ************************************ 00:11:32.783 END TEST nvmf_ns_hotplug_stress 00:11:32.783 ************************************ 00:11:33.041 09:19:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:33.041 09:19:43 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:33.041 09:19:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:33.041 09:19:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.041 09:19:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:33.041 ************************************ 00:11:33.041 START TEST nvmf_connect_stress 00:11:33.041 ************************************ 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:33.041 * Looking for test storage... 
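The shutdown trace above is SPDK's standard nvmftestfini path: retry unloading nvme-tcp (the module stays busy until queues drain), unload nvme-fabrics, kill the nvmf_tgt reactor by PID, then drop the test namespace and flush the initiator interface. A minimal standalone sketch of that cleanup, assuming the same device and namespace names (cvl_0_1, cvl_0_0_ns_spdk) and an $nvmfpid captured at target startup; the real helpers live in test/nvmf/common.sh and common/autotest_common.sh, and _remove_spdk_ns internals are not shown in the trace, so the netns delete below is an assumed equivalent:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # can fail while connections drain; retried up to 20 times
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
    if [[ -n $nvmfpid ]] && kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid"
        wait "$nvmfpid"
    fi
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed stand-in for _remove_spdk_ns
    ip -4 addr flush cvl_0_1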
00:11:33.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:33.041 09:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:35.575 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:35.575 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:35.575 Found net devices under 0000:09:00.0: cvl_0_0 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.575 09:19:46 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:35.575 Found net devices under 0000:09:00.1: cvl_0_1 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:35.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:11:35.575 00:11:35.575 --- 10.0.0.2 ping statistics --- 00:11:35.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.575 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:11:35.575 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:11:35.576 00:11:35.576 --- 10.0.0.1 ping statistics --- 00:11:35.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.576 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=768021 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 768021 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 768021 ']' 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.576 [2024-07-15 09:19:46.373623] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
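Everything up to the two successful pings is nvmf_tcp_init building the physical-NIC test topology: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting TCP port 4420. Condensed from the trace above into a runnable sketch, assuming the same cvl_* device names:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP in on the initiator port
    ping -c 1 10.0.0.2                                               # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # namespace -> root ns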
00:11:35.576 [2024-07-15 09:19:46.373712] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.576 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.576 [2024-07-15 09:19:46.453205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:35.576 [2024-07-15 09:19:46.587204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.576 [2024-07-15 09:19:46.587271] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.576 [2024-07-15 09:19:46.587298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.576 [2024-07-15 09:19:46.587336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.576 [2024-07-15 09:19:46.587356] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.576 [2024-07-15 09:19:46.587454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.576 [2024-07-15 09:19:46.587528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.576 [2024-07-15 09:19:46.587519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.576 [2024-07-15 09:19:46.745959] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.576 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.836 [2024-07-15 09:19:46.780974] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.836 NULL1 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=768097 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.836 09:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.096 09:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.096 09:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:36.096 09:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.096 09:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.096 09:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.354 09:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.354 09:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:36.354 09:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.354 09:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.354 09:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.613 09:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.613 09:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:36.613 
09:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.613 09:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.613 09:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.181 09:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.181 09:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:37.181 09:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.181 09:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.181 09:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.440 09:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.440 09:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:37.440 09:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.440 09:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.440 09:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.699 09:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.699 09:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:37.699 09:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.699 09:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.699 09:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.958 09:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.958 09:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:37.958 09:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.958 09:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.958 09:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.218 09:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.218 09:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:38.218 09:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.218 09:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.218 09:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.785 09:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.785 09:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:38.785 09:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.785 09:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.785 09:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.044 09:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.044 09:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:39.044 09:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:39.044 09:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.044 09:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.304 09:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.304 09:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:39.304 09:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.304 09:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.304 09:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.564 09:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.564 09:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:39.564 09:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.564 09:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.564 09:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.824 09:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.824 09:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:39.824 09:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.824 09:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.824 09:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.391 09:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.391 09:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:40.391 09:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.391 09:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.391 09:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.650 09:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.650 09:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:40.650 09:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.650 09:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.650 09:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.911 09:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.911 09:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:40.911 09:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.911 09:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.911 09:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.171 09:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.171 09:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:41.171 09:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.171 09:19:52 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.171 09:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.429 09:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.429 09:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:41.429 09:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.429 09:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.429 09:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.997 09:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.997 09:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:41.997 09:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.997 09:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.997 09:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.258 09:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.258 09:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:42.258 09:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.258 09:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.258 09:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.515 09:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.515 09:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:42.515 09:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.515 09:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.515 09:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.774 09:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.774 09:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:42.774 09:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.774 09:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.774 09:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.341 09:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.341 09:19:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:43.341 09:19:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.341 09:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.341 09:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.601 09:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.601 09:19:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:43.601 09:19:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.601 09:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.601 
09:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.860 09:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.860 09:19:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:43.860 09:19:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.860 09:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.860 09:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.119 09:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.119 09:19:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:44.119 09:19:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.119 09:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.119 09:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.379 09:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.379 09:19:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:44.379 09:19:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.379 09:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.379 09:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.949 09:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.949 09:19:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:44.949 09:19:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.949 09:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.949 09:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.208 09:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.208 09:19:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:45.208 09:19:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.208 09:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.208 09:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.467 09:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.467 09:19:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:45.467 09:19:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.467 09:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.467 09:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.726 09:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.726 09:19:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:45.726 09:19:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.726 09:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.726 09:19:56 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:11:45.726 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:45.984 09:19:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.984 09:19:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 768097 00:11:45.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (768097) - No such process 00:11:45.984 09:19:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 768097 00:11:45.984 09:19:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:45.984 09:19:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:45.984 09:19:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:45.984 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:45.984 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:45.984 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:45.984 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:45.984 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:45.984 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:45.984 rmmod nvme_tcp 00:11:45.984 rmmod nvme_fabrics 00:11:45.984 rmmod nvme_keyring 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 768021 ']' 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 768021 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 768021 ']' 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 768021 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 768021 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 768021' 00:11:46.243 killing process with pid 768021 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 768021 00:11:46.243 09:19:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 768021 00:11:46.504 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:46.504 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:46.504 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:46.504 09:19:57 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.504 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:46.504 09:19:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.504 09:19:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.504 09:19:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.479 09:19:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:48.479 00:11:48.479 real 0m15.493s 00:11:48.479 user 0m39.977s 00:11:48.479 sys 0m4.814s 00:11:48.479 09:19:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.479 09:19:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.479 ************************************ 00:11:48.479 END TEST nvmf_connect_stress 00:11:48.479 ************************************ 00:11:48.479 09:19:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:48.479 09:19:59 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:48.479 09:19:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:48.479 09:19:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.479 09:19:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:48.479 ************************************ 00:11:48.479 START TEST nvmf_fused_ordering 00:11:48.479 ************************************ 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:48.479 * Looking for test storage... 
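Stripped of the xtrace noise, the nvmf_connect_stress run above reduces to a short RPC setup against the target plus the connect_stress binary hammering the new subsystem while an RPC batch replays in a loop. A hedged recap using the values from the trace, with rpc.py assumed as the backend of the rpc_cmd wrapper on /var/tmp/spdk.sock; the rpc.txt batch built by the seq 1 20 cat loop is elided in the log, so its contents are not reproduced here:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$SPDK/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                 # 1000 blocks x 512 B null bdev backs the namespace
    $SPDK/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    perf_pid=$!
    while kill -0 "$perf_pid" 2>/dev/null; do            # connect_stress.sh@34 liveness poll
        while read -r cmd; do $rpc $cmd; done < rpc.txt  # assumed replay of the elided rpc.txt batch
    done
    wait "$perf_pid" || true                             # @38: reap after kill -0 reports "No such process"
    rm -f rpc.txt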
00:11:48.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.479 09:19:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:51.011 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:51.012 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:51.012 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:51.012 Found net devices under 0000:09:00.0: cvl_0_0 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.012 09:20:01 
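The discovery loop traced here buckets each PCI function by vendor:device pair (0x8086:0x159b is an Intel E810 port, hence the ice driver) and resolves the bound kernel netdev through sysfs. A minimal standalone sketch of that mapping, assuming the same /sys/bus/pci layout the harness reads:

# Sketch: list every E810 (8086:159b) function and its netdev name,
# mirroring the pci_net_devs lookup in nvmf/common.sh traced above.
for pci in /sys/bus/pci/devices/*; do
  vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
  if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
    for net in "$pci"/net/*; do
      [[ -e $net ]] && echo "${pci##*/} -> ${net##*/}"   # e.g. 0000:09:00.0 -> cvl_0_0
    done
  fi
done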
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:51.012 Found net devices under 0000:09:00.1: cvl_0_1 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:51.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:51.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:11:51.012 00:11:51.012 --- 10.0.0.2 ping statistics --- 00:11:51.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.012 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:51.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:11:51.012 00:11:51.012 --- 10.0.0.1 ping statistics --- 00:11:51.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.012 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=771250 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 771250 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 771250 ']' 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.012 09:20:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:51.012 [2024-07-15 09:20:02.017773] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
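Both reachability checks pass and nvmf_tgt is coming up inside the namespace, so the topology nvmf_tcp_init just built is worth spelling out: the first E810 port (cvl_0_0, 10.0.0.2) is moved into cvl_0_0_ns_spdk to host the target, the second (cvl_0_1, 10.0.0.1) stays in the default namespace as the initiator, and an iptables rule admits TCP/4420. A condensed replay of the commands traced above:

# Condensed sketch of nvmf_tcp_init (interface names as detected on this host).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator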
00:11:51.012 [2024-07-15 09:20:02.017903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.012 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.012 [2024-07-15 09:20:02.082585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.012 [2024-07-15 09:20:02.188552] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.012 [2024-07-15 09:20:02.188616] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.012 [2024-07-15 09:20:02.188655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.012 [2024-07-15 09:20:02.188669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.012 [2024-07-15 09:20:02.188680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.012 [2024-07-15 09:20:02.188732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:51.272 [2024-07-15 09:20:02.338314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:51.272 [2024-07-15 09:20:02.354466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.272 09:20:02 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:51.272 NULL1 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.272 09:20:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:51.272 [2024-07-15 09:20:02.398148] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:11:51.273 [2024-07-15 09:20:02.398185] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771390 ] 00:11:51.273 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.842 Attached to nqn.2016-06.io.spdk:cnode1 00:11:51.842 Namespace ID: 1 size: 1GB 00:11:51.842 fused_ordering(0) 00:11:51.842 fused_ordering(1) 00:11:51.842 fused_ordering(2) 00:11:51.842 fused_ordering(3) 00:11:51.842 fused_ordering(4) 00:11:51.842 fused_ordering(5) 00:11:51.842 fused_ordering(6) 00:11:51.842 fused_ordering(7) 00:11:51.842 fused_ordering(8) 00:11:51.842 fused_ordering(9) 00:11:51.842 fused_ordering(10) 00:11:51.842 fused_ordering(11) 00:11:51.842 fused_ordering(12) 00:11:51.842 fused_ordering(13) 00:11:51.842 fused_ordering(14) 00:11:51.842 fused_ordering(15) 00:11:51.842 fused_ordering(16) 00:11:51.842 fused_ordering(17) 00:11:51.842 fused_ordering(18) 00:11:51.842 fused_ordering(19) 00:11:51.842 fused_ordering(20) 00:11:51.842 fused_ordering(21) 00:11:51.842 fused_ordering(22) 00:11:51.842 fused_ordering(23) 00:11:51.842 fused_ordering(24) 00:11:51.842 fused_ordering(25) 00:11:51.842 fused_ordering(26) 00:11:51.842 fused_ordering(27) 00:11:51.842 fused_ordering(28) 00:11:51.842 fused_ordering(29) 00:11:51.842 fused_ordering(30) 00:11:51.842 fused_ordering(31) 00:11:51.842 fused_ordering(32) 00:11:51.842 fused_ordering(33) 00:11:51.842 fused_ordering(34) 00:11:51.842 fused_ordering(35) 00:11:51.842 fused_ordering(36) 00:11:51.842 fused_ordering(37) 00:11:51.842 fused_ordering(38) 00:11:51.842 fused_ordering(39) 00:11:51.842 fused_ordering(40) 00:11:51.842 fused_ordering(41) 00:11:51.842 fused_ordering(42) 00:11:51.842 fused_ordering(43) 00:11:51.842 
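Everything the tool needs was provisioned over JSON-RPC in the lines above: a TCP transport with an 8192-byte IO unit, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 queues, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev attached as namespace 1. Spelled out with scripts/rpc.py instead of the rpc_cmd wrapper (a sketch; $SPDK_DIR is a placeholder for the checkout path, not from the log):

# Sketch: the rpc_cmd sequence above, issued directly against the
# default /var/tmp/spdk.sock.
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
$SPDK_DIR/scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MiB, 512 B blocks
$SPDK_DIR/scripts/rpc.py bdev_wait_for_examine
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# Initiator side: point the test binary at the listener with a transport ID.
$SPDK_DIR/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(0) through fused_ordering(1023) counters around this point are the tool's progress output for its 1,024-entry fused-command workload.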
fused_ordering(44) 00:11:51.842 fused_ordering(45) 00:11:51.842 fused_ordering(46) 00:11:51.842 fused_ordering(47) 00:11:51.842 fused_ordering(48) 00:11:51.842 fused_ordering(49) 00:11:51.842 fused_ordering(50) 00:11:51.842 fused_ordering(51) 00:11:51.842 fused_ordering(52) 00:11:51.842 fused_ordering(53) 00:11:51.842 fused_ordering(54) 00:11:51.842 fused_ordering(55) 00:11:51.842 fused_ordering(56) 00:11:51.842 fused_ordering(57) 00:11:51.842 fused_ordering(58) 00:11:51.842 fused_ordering(59) 00:11:51.842 fused_ordering(60) 00:11:51.842 fused_ordering(61) 00:11:51.842 fused_ordering(62) 00:11:51.842 fused_ordering(63) 00:11:51.842 fused_ordering(64) 00:11:51.842 fused_ordering(65) 00:11:51.842 fused_ordering(66) 00:11:51.842 fused_ordering(67) 00:11:51.842 fused_ordering(68) 00:11:51.842 fused_ordering(69) 00:11:51.842 fused_ordering(70) 00:11:51.842 fused_ordering(71) 00:11:51.842 fused_ordering(72) 00:11:51.842 fused_ordering(73) 00:11:51.842 fused_ordering(74) 00:11:51.842 fused_ordering(75) 00:11:51.842 fused_ordering(76) 00:11:51.842 fused_ordering(77) 00:11:51.842 fused_ordering(78) 00:11:51.842 fused_ordering(79) 00:11:51.842 fused_ordering(80) 00:11:51.842 fused_ordering(81) 00:11:51.842 fused_ordering(82) 00:11:51.842 fused_ordering(83) 00:11:51.843 fused_ordering(84) 00:11:51.843 fused_ordering(85) 00:11:51.843 fused_ordering(86) 00:11:51.843 fused_ordering(87) 00:11:51.843 fused_ordering(88) 00:11:51.843 fused_ordering(89) 00:11:51.843 fused_ordering(90) 00:11:51.843 fused_ordering(91) 00:11:51.843 fused_ordering(92) 00:11:51.843 fused_ordering(93) 00:11:51.843 fused_ordering(94) 00:11:51.843 fused_ordering(95) 00:11:51.843 fused_ordering(96) 00:11:51.843 fused_ordering(97) 00:11:51.843 fused_ordering(98) 00:11:51.843 fused_ordering(99) 00:11:51.843 fused_ordering(100) 00:11:51.843 fused_ordering(101) 00:11:51.843 fused_ordering(102) 00:11:51.843 fused_ordering(103) 00:11:51.843 fused_ordering(104) 00:11:51.843 fused_ordering(105) 00:11:51.843 fused_ordering(106) 00:11:51.843 fused_ordering(107) 00:11:51.843 fused_ordering(108) 00:11:51.843 fused_ordering(109) 00:11:51.843 fused_ordering(110) 00:11:51.843 fused_ordering(111) 00:11:51.843 fused_ordering(112) 00:11:51.843 fused_ordering(113) 00:11:51.843 fused_ordering(114) 00:11:51.843 fused_ordering(115) 00:11:51.843 fused_ordering(116) 00:11:51.843 fused_ordering(117) 00:11:51.843 fused_ordering(118) 00:11:51.843 fused_ordering(119) 00:11:51.843 fused_ordering(120) 00:11:51.843 fused_ordering(121) 00:11:51.843 fused_ordering(122) 00:11:51.843 fused_ordering(123) 00:11:51.843 fused_ordering(124) 00:11:51.843 fused_ordering(125) 00:11:51.843 fused_ordering(126) 00:11:51.843 fused_ordering(127) 00:11:51.843 fused_ordering(128) 00:11:51.843 fused_ordering(129) 00:11:51.843 fused_ordering(130) 00:11:51.843 fused_ordering(131) 00:11:51.843 fused_ordering(132) 00:11:51.843 fused_ordering(133) 00:11:51.843 fused_ordering(134) 00:11:51.843 fused_ordering(135) 00:11:51.843 fused_ordering(136) 00:11:51.843 fused_ordering(137) 00:11:51.843 fused_ordering(138) 00:11:51.843 fused_ordering(139) 00:11:51.843 fused_ordering(140) 00:11:51.843 fused_ordering(141) 00:11:51.843 fused_ordering(142) 00:11:51.843 fused_ordering(143) 00:11:51.843 fused_ordering(144) 00:11:51.843 fused_ordering(145) 00:11:51.843 fused_ordering(146) 00:11:51.843 fused_ordering(147) 00:11:51.843 fused_ordering(148) 00:11:51.843 fused_ordering(149) 00:11:51.843 fused_ordering(150) 00:11:51.843 fused_ordering(151) 00:11:51.843 fused_ordering(152) 00:11:51.843 
fused_ordering(153) 00:11:51.843 fused_ordering(154) 00:11:51.843 fused_ordering(155) 00:11:51.843 fused_ordering(156) 00:11:51.843 fused_ordering(157) 00:11:51.843 fused_ordering(158) 00:11:51.843 fused_ordering(159) 00:11:51.843 fused_ordering(160) 00:11:51.843 fused_ordering(161) 00:11:51.843 fused_ordering(162) 00:11:51.843 fused_ordering(163) 00:11:51.843 fused_ordering(164) 00:11:51.843 fused_ordering(165) 00:11:51.843 fused_ordering(166) 00:11:51.843 fused_ordering(167) 00:11:51.843 fused_ordering(168) 00:11:51.843 fused_ordering(169) 00:11:51.843 fused_ordering(170) 00:11:51.843 fused_ordering(171) 00:11:51.843 fused_ordering(172) 00:11:51.843 fused_ordering(173) 00:11:51.843 fused_ordering(174) 00:11:51.843 fused_ordering(175) 00:11:51.843 fused_ordering(176) 00:11:51.843 fused_ordering(177) 00:11:51.843 fused_ordering(178) 00:11:51.843 fused_ordering(179) 00:11:51.843 fused_ordering(180) 00:11:51.843 fused_ordering(181) 00:11:51.843 fused_ordering(182) 00:11:51.843 fused_ordering(183) 00:11:51.843 fused_ordering(184) 00:11:51.843 fused_ordering(185) 00:11:51.843 fused_ordering(186) 00:11:51.843 fused_ordering(187) 00:11:51.843 fused_ordering(188) 00:11:51.843 fused_ordering(189) 00:11:51.843 fused_ordering(190) 00:11:51.843 fused_ordering(191) 00:11:51.843 fused_ordering(192) 00:11:51.843 fused_ordering(193) 00:11:51.843 fused_ordering(194) 00:11:51.843 fused_ordering(195) 00:11:51.843 fused_ordering(196) 00:11:51.843 fused_ordering(197) 00:11:51.843 fused_ordering(198) 00:11:51.843 fused_ordering(199) 00:11:51.843 fused_ordering(200) 00:11:51.843 fused_ordering(201) 00:11:51.843 fused_ordering(202) 00:11:51.843 fused_ordering(203) 00:11:51.843 fused_ordering(204) 00:11:51.843 fused_ordering(205) 00:11:52.102 fused_ordering(206) 00:11:52.102 fused_ordering(207) 00:11:52.102 fused_ordering(208) 00:11:52.102 fused_ordering(209) 00:11:52.102 fused_ordering(210) 00:11:52.102 fused_ordering(211) 00:11:52.102 fused_ordering(212) 00:11:52.102 fused_ordering(213) 00:11:52.102 fused_ordering(214) 00:11:52.102 fused_ordering(215) 00:11:52.102 fused_ordering(216) 00:11:52.102 fused_ordering(217) 00:11:52.102 fused_ordering(218) 00:11:52.102 fused_ordering(219) 00:11:52.102 fused_ordering(220) 00:11:52.102 fused_ordering(221) 00:11:52.102 fused_ordering(222) 00:11:52.102 fused_ordering(223) 00:11:52.102 fused_ordering(224) 00:11:52.102 fused_ordering(225) 00:11:52.102 fused_ordering(226) 00:11:52.102 fused_ordering(227) 00:11:52.102 fused_ordering(228) 00:11:52.102 fused_ordering(229) 00:11:52.102 fused_ordering(230) 00:11:52.102 fused_ordering(231) 00:11:52.102 fused_ordering(232) 00:11:52.102 fused_ordering(233) 00:11:52.102 fused_ordering(234) 00:11:52.102 fused_ordering(235) 00:11:52.102 fused_ordering(236) 00:11:52.102 fused_ordering(237) 00:11:52.102 fused_ordering(238) 00:11:52.102 fused_ordering(239) 00:11:52.102 fused_ordering(240) 00:11:52.102 fused_ordering(241) 00:11:52.102 fused_ordering(242) 00:11:52.102 fused_ordering(243) 00:11:52.102 fused_ordering(244) 00:11:52.102 fused_ordering(245) 00:11:52.102 fused_ordering(246) 00:11:52.102 fused_ordering(247) 00:11:52.102 fused_ordering(248) 00:11:52.102 fused_ordering(249) 00:11:52.102 fused_ordering(250) 00:11:52.102 fused_ordering(251) 00:11:52.102 fused_ordering(252) 00:11:52.102 fused_ordering(253) 00:11:52.102 fused_ordering(254) 00:11:52.102 fused_ordering(255) 00:11:52.102 fused_ordering(256) 00:11:52.102 fused_ordering(257) 00:11:52.102 fused_ordering(258) 00:11:52.102 fused_ordering(259) 00:11:52.102 fused_ordering(260) 
00:11:52.102 fused_ordering(261) 00:11:52.102 fused_ordering(262) 00:11:52.102 fused_ordering(263) 00:11:52.102 fused_ordering(264) 00:11:52.102 fused_ordering(265) 00:11:52.102 fused_ordering(266) 00:11:52.102 fused_ordering(267) 00:11:52.102 fused_ordering(268) 00:11:52.102 fused_ordering(269) 00:11:52.102 fused_ordering(270) 00:11:52.102 fused_ordering(271) 00:11:52.102 fused_ordering(272) 00:11:52.102 fused_ordering(273) 00:11:52.102 fused_ordering(274) 00:11:52.102 fused_ordering(275) 00:11:52.102 fused_ordering(276) 00:11:52.102 fused_ordering(277) 00:11:52.102 fused_ordering(278) 00:11:52.102 fused_ordering(279) 00:11:52.102 fused_ordering(280) 00:11:52.102 fused_ordering(281) 00:11:52.102 fused_ordering(282) 00:11:52.102 fused_ordering(283) 00:11:52.102 fused_ordering(284) 00:11:52.102 fused_ordering(285) 00:11:52.102 fused_ordering(286) 00:11:52.102 fused_ordering(287) 00:11:52.102 fused_ordering(288) 00:11:52.102 fused_ordering(289) 00:11:52.102 fused_ordering(290) 00:11:52.102 fused_ordering(291) 00:11:52.102 fused_ordering(292) 00:11:52.102 fused_ordering(293) 00:11:52.102 fused_ordering(294) 00:11:52.102 fused_ordering(295) 00:11:52.102 fused_ordering(296) 00:11:52.102 fused_ordering(297) 00:11:52.102 fused_ordering(298) 00:11:52.102 fused_ordering(299) 00:11:52.102 fused_ordering(300) 00:11:52.102 fused_ordering(301) 00:11:52.102 fused_ordering(302) 00:11:52.102 fused_ordering(303) 00:11:52.102 fused_ordering(304) 00:11:52.102 fused_ordering(305) 00:11:52.102 fused_ordering(306) 00:11:52.102 fused_ordering(307) 00:11:52.102 fused_ordering(308) 00:11:52.102 fused_ordering(309) 00:11:52.102 fused_ordering(310) 00:11:52.102 fused_ordering(311) 00:11:52.102 fused_ordering(312) 00:11:52.102 fused_ordering(313) 00:11:52.102 fused_ordering(314) 00:11:52.102 fused_ordering(315) 00:11:52.102 fused_ordering(316) 00:11:52.102 fused_ordering(317) 00:11:52.102 fused_ordering(318) 00:11:52.102 fused_ordering(319) 00:11:52.102 fused_ordering(320) 00:11:52.102 fused_ordering(321) 00:11:52.102 fused_ordering(322) 00:11:52.102 fused_ordering(323) 00:11:52.102 fused_ordering(324) 00:11:52.102 fused_ordering(325) 00:11:52.102 fused_ordering(326) 00:11:52.102 fused_ordering(327) 00:11:52.102 fused_ordering(328) 00:11:52.102 fused_ordering(329) 00:11:52.102 fused_ordering(330) 00:11:52.102 fused_ordering(331) 00:11:52.102 fused_ordering(332) 00:11:52.102 fused_ordering(333) 00:11:52.102 fused_ordering(334) 00:11:52.102 fused_ordering(335) 00:11:52.102 fused_ordering(336) 00:11:52.102 fused_ordering(337) 00:11:52.102 fused_ordering(338) 00:11:52.102 fused_ordering(339) 00:11:52.102 fused_ordering(340) 00:11:52.102 fused_ordering(341) 00:11:52.102 fused_ordering(342) 00:11:52.102 fused_ordering(343) 00:11:52.102 fused_ordering(344) 00:11:52.102 fused_ordering(345) 00:11:52.102 fused_ordering(346) 00:11:52.102 fused_ordering(347) 00:11:52.102 fused_ordering(348) 00:11:52.102 fused_ordering(349) 00:11:52.102 fused_ordering(350) 00:11:52.102 fused_ordering(351) 00:11:52.102 fused_ordering(352) 00:11:52.102 fused_ordering(353) 00:11:52.102 fused_ordering(354) 00:11:52.102 fused_ordering(355) 00:11:52.102 fused_ordering(356) 00:11:52.102 fused_ordering(357) 00:11:52.102 fused_ordering(358) 00:11:52.102 fused_ordering(359) 00:11:52.102 fused_ordering(360) 00:11:52.103 fused_ordering(361) 00:11:52.103 fused_ordering(362) 00:11:52.103 fused_ordering(363) 00:11:52.103 fused_ordering(364) 00:11:52.103 fused_ordering(365) 00:11:52.103 fused_ordering(366) 00:11:52.103 fused_ordering(367) 00:11:52.103 
fused_ordering(368) 00:11:52.103 fused_ordering(369) 00:11:52.103 fused_ordering(370) 00:11:52.103 fused_ordering(371) 00:11:52.103 fused_ordering(372) 00:11:52.103 fused_ordering(373) 00:11:52.103 fused_ordering(374) 00:11:52.103 fused_ordering(375) 00:11:52.103 fused_ordering(376) 00:11:52.103 fused_ordering(377) 00:11:52.103 fused_ordering(378) 00:11:52.103 fused_ordering(379) 00:11:52.103 fused_ordering(380) 00:11:52.103 fused_ordering(381) 00:11:52.103 fused_ordering(382) 00:11:52.103 fused_ordering(383) 00:11:52.103 fused_ordering(384) 00:11:52.103 fused_ordering(385) 00:11:52.103 fused_ordering(386) 00:11:52.103 fused_ordering(387) 00:11:52.103 fused_ordering(388) 00:11:52.103 fused_ordering(389) 00:11:52.103 fused_ordering(390) 00:11:52.103 fused_ordering(391) 00:11:52.103 fused_ordering(392) 00:11:52.103 fused_ordering(393) 00:11:52.103 fused_ordering(394) 00:11:52.103 fused_ordering(395) 00:11:52.103 fused_ordering(396) 00:11:52.103 fused_ordering(397) 00:11:52.103 fused_ordering(398) 00:11:52.103 fused_ordering(399) 00:11:52.103 fused_ordering(400) 00:11:52.103 fused_ordering(401) 00:11:52.103 fused_ordering(402) 00:11:52.103 fused_ordering(403) 00:11:52.103 fused_ordering(404) 00:11:52.103 fused_ordering(405) 00:11:52.103 fused_ordering(406) 00:11:52.103 fused_ordering(407) 00:11:52.103 fused_ordering(408) 00:11:52.103 fused_ordering(409) 00:11:52.103 fused_ordering(410) 00:11:52.362 fused_ordering(411) 00:11:52.362 fused_ordering(412) 00:11:52.362 fused_ordering(413) 00:11:52.362 fused_ordering(414) 00:11:52.362 fused_ordering(415) 00:11:52.362 fused_ordering(416) 00:11:52.362 fused_ordering(417) 00:11:52.362 fused_ordering(418) 00:11:52.362 fused_ordering(419) 00:11:52.362 fused_ordering(420) 00:11:52.362 fused_ordering(421) 00:11:52.362 fused_ordering(422) 00:11:52.362 fused_ordering(423) 00:11:52.362 fused_ordering(424) 00:11:52.362 fused_ordering(425) 00:11:52.362 fused_ordering(426) 00:11:52.362 fused_ordering(427) 00:11:52.362 fused_ordering(428) 00:11:52.362 fused_ordering(429) 00:11:52.362 fused_ordering(430) 00:11:52.362 fused_ordering(431) 00:11:52.362 fused_ordering(432) 00:11:52.362 fused_ordering(433) 00:11:52.362 fused_ordering(434) 00:11:52.362 fused_ordering(435) 00:11:52.362 fused_ordering(436) 00:11:52.362 fused_ordering(437) 00:11:52.362 fused_ordering(438) 00:11:52.362 fused_ordering(439) 00:11:52.362 fused_ordering(440) 00:11:52.362 fused_ordering(441) 00:11:52.362 fused_ordering(442) 00:11:52.362 fused_ordering(443) 00:11:52.362 fused_ordering(444) 00:11:52.362 fused_ordering(445) 00:11:52.362 fused_ordering(446) 00:11:52.362 fused_ordering(447) 00:11:52.362 fused_ordering(448) 00:11:52.362 fused_ordering(449) 00:11:52.362 fused_ordering(450) 00:11:52.362 fused_ordering(451) 00:11:52.362 fused_ordering(452) 00:11:52.362 fused_ordering(453) 00:11:52.362 fused_ordering(454) 00:11:52.362 fused_ordering(455) 00:11:52.362 fused_ordering(456) 00:11:52.362 fused_ordering(457) 00:11:52.362 fused_ordering(458) 00:11:52.362 fused_ordering(459) 00:11:52.362 fused_ordering(460) 00:11:52.362 fused_ordering(461) 00:11:52.362 fused_ordering(462) 00:11:52.363 fused_ordering(463) 00:11:52.363 fused_ordering(464) 00:11:52.363 fused_ordering(465) 00:11:52.363 fused_ordering(466) 00:11:52.363 fused_ordering(467) 00:11:52.363 fused_ordering(468) 00:11:52.363 fused_ordering(469) 00:11:52.363 fused_ordering(470) 00:11:52.363 fused_ordering(471) 00:11:52.363 fused_ordering(472) 00:11:52.363 fused_ordering(473) 00:11:52.363 fused_ordering(474) 00:11:52.363 fused_ordering(475) 
00:11:52.363 fused_ordering(476) 00:11:52.363 fused_ordering(477) 00:11:52.363 fused_ordering(478) 00:11:52.363 fused_ordering(479) 00:11:52.363 fused_ordering(480) 00:11:52.363 fused_ordering(481) 00:11:52.363 fused_ordering(482) 00:11:52.363 fused_ordering(483) 00:11:52.363 fused_ordering(484) 00:11:52.363 fused_ordering(485) 00:11:52.363 fused_ordering(486) 00:11:52.363 fused_ordering(487) 00:11:52.363 fused_ordering(488) 00:11:52.363 fused_ordering(489) 00:11:52.363 fused_ordering(490) 00:11:52.363 fused_ordering(491) 00:11:52.363 fused_ordering(492) 00:11:52.363 fused_ordering(493) 00:11:52.363 fused_ordering(494) 00:11:52.363 fused_ordering(495) 00:11:52.363 fused_ordering(496) 00:11:52.363 fused_ordering(497) 00:11:52.363 fused_ordering(498) 00:11:52.363 fused_ordering(499) 00:11:52.363 fused_ordering(500) 00:11:52.363 fused_ordering(501) 00:11:52.363 fused_ordering(502) 00:11:52.363 fused_ordering(503) 00:11:52.363 fused_ordering(504) 00:11:52.363 fused_ordering(505) 00:11:52.363 fused_ordering(506) 00:11:52.363 fused_ordering(507) 00:11:52.363 fused_ordering(508) 00:11:52.363 fused_ordering(509) 00:11:52.363 fused_ordering(510) 00:11:52.363 fused_ordering(511) 00:11:52.363 fused_ordering(512) 00:11:52.363 fused_ordering(513) 00:11:52.363 fused_ordering(514) 00:11:52.363 fused_ordering(515) 00:11:52.363 fused_ordering(516) 00:11:52.363 fused_ordering(517) 00:11:52.363 fused_ordering(518) 00:11:52.363 fused_ordering(519) 00:11:52.363 fused_ordering(520) 00:11:52.363 fused_ordering(521) 00:11:52.363 fused_ordering(522) 00:11:52.363 fused_ordering(523) 00:11:52.363 fused_ordering(524) 00:11:52.363 fused_ordering(525) 00:11:52.363 fused_ordering(526) 00:11:52.363 fused_ordering(527) 00:11:52.363 fused_ordering(528) 00:11:52.363 fused_ordering(529) 00:11:52.363 fused_ordering(530) 00:11:52.363 fused_ordering(531) 00:11:52.363 fused_ordering(532) 00:11:52.363 fused_ordering(533) 00:11:52.363 fused_ordering(534) 00:11:52.363 fused_ordering(535) 00:11:52.363 fused_ordering(536) 00:11:52.363 fused_ordering(537) 00:11:52.363 fused_ordering(538) 00:11:52.363 fused_ordering(539) 00:11:52.363 fused_ordering(540) 00:11:52.363 fused_ordering(541) 00:11:52.363 fused_ordering(542) 00:11:52.363 fused_ordering(543) 00:11:52.363 fused_ordering(544) 00:11:52.363 fused_ordering(545) 00:11:52.363 fused_ordering(546) 00:11:52.363 fused_ordering(547) 00:11:52.363 fused_ordering(548) 00:11:52.363 fused_ordering(549) 00:11:52.363 fused_ordering(550) 00:11:52.363 fused_ordering(551) 00:11:52.363 fused_ordering(552) 00:11:52.363 fused_ordering(553) 00:11:52.363 fused_ordering(554) 00:11:52.363 fused_ordering(555) 00:11:52.363 fused_ordering(556) 00:11:52.363 fused_ordering(557) 00:11:52.363 fused_ordering(558) 00:11:52.363 fused_ordering(559) 00:11:52.363 fused_ordering(560) 00:11:52.363 fused_ordering(561) 00:11:52.363 fused_ordering(562) 00:11:52.363 fused_ordering(563) 00:11:52.363 fused_ordering(564) 00:11:52.363 fused_ordering(565) 00:11:52.363 fused_ordering(566) 00:11:52.363 fused_ordering(567) 00:11:52.363 fused_ordering(568) 00:11:52.363 fused_ordering(569) 00:11:52.363 fused_ordering(570) 00:11:52.363 fused_ordering(571) 00:11:52.363 fused_ordering(572) 00:11:52.363 fused_ordering(573) 00:11:52.363 fused_ordering(574) 00:11:52.363 fused_ordering(575) 00:11:52.363 fused_ordering(576) 00:11:52.363 fused_ordering(577) 00:11:52.363 fused_ordering(578) 00:11:52.363 fused_ordering(579) 00:11:52.363 fused_ordering(580) 00:11:52.363 fused_ordering(581) 00:11:52.363 fused_ordering(582) 00:11:52.363 
fused_ordering(583) 00:11:52.363 fused_ordering(584) 00:11:52.363 fused_ordering(585) 00:11:52.363 fused_ordering(586) 00:11:52.363 fused_ordering(587) 00:11:52.363 fused_ordering(588) 00:11:52.363 fused_ordering(589) 00:11:52.363 fused_ordering(590) 00:11:52.363 fused_ordering(591) 00:11:52.363 fused_ordering(592) 00:11:52.363 fused_ordering(593) 00:11:52.363 fused_ordering(594) 00:11:52.363 fused_ordering(595) 00:11:52.363 fused_ordering(596) 00:11:52.363 fused_ordering(597) 00:11:52.363 fused_ordering(598) 00:11:52.363 fused_ordering(599) 00:11:52.363 fused_ordering(600) 00:11:52.363 fused_ordering(601) 00:11:52.363 fused_ordering(602) 00:11:52.363 fused_ordering(603) 00:11:52.363 fused_ordering(604) 00:11:52.363 fused_ordering(605) 00:11:52.363 fused_ordering(606) 00:11:52.363 fused_ordering(607) 00:11:52.363 fused_ordering(608) 00:11:52.363 fused_ordering(609) 00:11:52.363 fused_ordering(610) 00:11:52.363 fused_ordering(611) 00:11:52.363 fused_ordering(612) 00:11:52.363 fused_ordering(613) 00:11:52.363 fused_ordering(614) 00:11:52.363 fused_ordering(615) 00:11:52.932 fused_ordering(616) 00:11:52.932 fused_ordering(617) 00:11:52.932 fused_ordering(618) 00:11:52.932 fused_ordering(619) 00:11:52.932 fused_ordering(620) 00:11:52.932 fused_ordering(621) 00:11:52.932 fused_ordering(622) 00:11:52.932 fused_ordering(623) 00:11:52.932 fused_ordering(624) 00:11:52.932 fused_ordering(625) 00:11:52.932 fused_ordering(626) 00:11:52.932 fused_ordering(627) 00:11:52.932 fused_ordering(628) 00:11:52.932 fused_ordering(629) 00:11:52.932 fused_ordering(630) 00:11:52.932 fused_ordering(631) 00:11:52.932 fused_ordering(632) 00:11:52.932 fused_ordering(633) 00:11:52.932 fused_ordering(634) 00:11:52.932 fused_ordering(635) 00:11:52.932 fused_ordering(636) 00:11:52.932 fused_ordering(637) 00:11:52.932 fused_ordering(638) 00:11:52.932 fused_ordering(639) 00:11:52.932 fused_ordering(640) 00:11:52.932 fused_ordering(641) 00:11:52.932 fused_ordering(642) 00:11:52.932 fused_ordering(643) 00:11:52.932 fused_ordering(644) 00:11:52.932 fused_ordering(645) 00:11:52.932 fused_ordering(646) 00:11:52.932 fused_ordering(647) 00:11:52.932 fused_ordering(648) 00:11:52.932 fused_ordering(649) 00:11:52.932 fused_ordering(650) 00:11:52.932 fused_ordering(651) 00:11:52.932 fused_ordering(652) 00:11:52.932 fused_ordering(653) 00:11:52.932 fused_ordering(654) 00:11:52.932 fused_ordering(655) 00:11:52.932 fused_ordering(656) 00:11:52.932 fused_ordering(657) 00:11:52.932 fused_ordering(658) 00:11:52.932 fused_ordering(659) 00:11:52.932 fused_ordering(660) 00:11:52.932 fused_ordering(661) 00:11:52.932 fused_ordering(662) 00:11:52.932 fused_ordering(663) 00:11:52.932 fused_ordering(664) 00:11:52.932 fused_ordering(665) 00:11:52.932 fused_ordering(666) 00:11:52.932 fused_ordering(667) 00:11:52.932 fused_ordering(668) 00:11:52.932 fused_ordering(669) 00:11:52.932 fused_ordering(670) 00:11:52.932 fused_ordering(671) 00:11:52.932 fused_ordering(672) 00:11:52.932 fused_ordering(673) 00:11:52.932 fused_ordering(674) 00:11:52.932 fused_ordering(675) 00:11:52.932 fused_ordering(676) 00:11:52.932 fused_ordering(677) 00:11:52.932 fused_ordering(678) 00:11:52.932 fused_ordering(679) 00:11:52.932 fused_ordering(680) 00:11:52.932 fused_ordering(681) 00:11:52.932 fused_ordering(682) 00:11:52.932 fused_ordering(683) 00:11:52.932 fused_ordering(684) 00:11:52.932 fused_ordering(685) 00:11:52.932 fused_ordering(686) 00:11:52.932 fused_ordering(687) 00:11:52.932 fused_ordering(688) 00:11:52.932 fused_ordering(689) 00:11:52.932 fused_ordering(690) 
00:11:52.932 fused_ordering(691) 00:11:52.932 fused_ordering(692) 00:11:52.932 fused_ordering(693) 00:11:52.932 fused_ordering(694) 00:11:52.932 fused_ordering(695) 00:11:52.932 fused_ordering(696) 00:11:52.932 fused_ordering(697) 00:11:52.932 fused_ordering(698) 00:11:52.932 fused_ordering(699) 00:11:52.932 fused_ordering(700) 00:11:52.932 fused_ordering(701) 00:11:52.932 fused_ordering(702) 00:11:52.932 fused_ordering(703) 00:11:52.932 fused_ordering(704) 00:11:52.932 fused_ordering(705) 00:11:52.932 fused_ordering(706) 00:11:52.932 fused_ordering(707) 00:11:52.932 fused_ordering(708) 00:11:52.932 fused_ordering(709) 00:11:52.932 fused_ordering(710) 00:11:52.932 fused_ordering(711) 00:11:52.932 fused_ordering(712) 00:11:52.932 fused_ordering(713) 00:11:52.932 fused_ordering(714) 00:11:52.932 fused_ordering(715) 00:11:52.932 fused_ordering(716) 00:11:52.932 fused_ordering(717) 00:11:52.932 fused_ordering(718) 00:11:52.932 fused_ordering(719) 00:11:52.932 fused_ordering(720) 00:11:52.932 fused_ordering(721) 00:11:52.932 fused_ordering(722) 00:11:52.932 fused_ordering(723) 00:11:52.932 fused_ordering(724) 00:11:52.932 fused_ordering(725) 00:11:52.932 fused_ordering(726) 00:11:52.932 fused_ordering(727) 00:11:52.932 fused_ordering(728) 00:11:52.932 fused_ordering(729) 00:11:52.932 fused_ordering(730) 00:11:52.932 fused_ordering(731) 00:11:52.932 fused_ordering(732) 00:11:52.932 fused_ordering(733) 00:11:52.932 fused_ordering(734) 00:11:52.932 fused_ordering(735) 00:11:52.932 fused_ordering(736) 00:11:52.932 fused_ordering(737) 00:11:52.932 fused_ordering(738) 00:11:52.932 fused_ordering(739) 00:11:52.932 fused_ordering(740) 00:11:52.932 fused_ordering(741) 00:11:52.932 fused_ordering(742) 00:11:52.932 fused_ordering(743) 00:11:52.932 fused_ordering(744) 00:11:52.932 fused_ordering(745) 00:11:52.932 fused_ordering(746) 00:11:52.932 fused_ordering(747) 00:11:52.932 fused_ordering(748) 00:11:52.932 fused_ordering(749) 00:11:52.932 fused_ordering(750) 00:11:52.932 fused_ordering(751) 00:11:52.932 fused_ordering(752) 00:11:52.932 fused_ordering(753) 00:11:52.932 fused_ordering(754) 00:11:52.932 fused_ordering(755) 00:11:52.932 fused_ordering(756) 00:11:52.932 fused_ordering(757) 00:11:52.932 fused_ordering(758) 00:11:52.932 fused_ordering(759) 00:11:52.932 fused_ordering(760) 00:11:52.932 fused_ordering(761) 00:11:52.932 fused_ordering(762) 00:11:52.932 fused_ordering(763) 00:11:52.932 fused_ordering(764) 00:11:52.932 fused_ordering(765) 00:11:52.932 fused_ordering(766) 00:11:52.932 fused_ordering(767) 00:11:52.932 fused_ordering(768) 00:11:52.932 fused_ordering(769) 00:11:52.932 fused_ordering(770) 00:11:52.932 fused_ordering(771) 00:11:52.932 fused_ordering(772) 00:11:52.932 fused_ordering(773) 00:11:52.932 fused_ordering(774) 00:11:52.932 fused_ordering(775) 00:11:52.932 fused_ordering(776) 00:11:52.932 fused_ordering(777) 00:11:52.932 fused_ordering(778) 00:11:52.932 fused_ordering(779) 00:11:52.932 fused_ordering(780) 00:11:52.932 fused_ordering(781) 00:11:52.932 fused_ordering(782) 00:11:52.932 fused_ordering(783) 00:11:52.932 fused_ordering(784) 00:11:52.932 fused_ordering(785) 00:11:52.932 fused_ordering(786) 00:11:52.932 fused_ordering(787) 00:11:52.932 fused_ordering(788) 00:11:52.932 fused_ordering(789) 00:11:52.932 fused_ordering(790) 00:11:52.932 fused_ordering(791) 00:11:52.932 fused_ordering(792) 00:11:52.932 fused_ordering(793) 00:11:52.932 fused_ordering(794) 00:11:52.932 fused_ordering(795) 00:11:52.932 fused_ordering(796) 00:11:52.932 fused_ordering(797) 00:11:52.932 
fused_ordering(798) 00:11:52.932 fused_ordering(799) 00:11:52.932 fused_ordering(800) 00:11:52.932 fused_ordering(801) 00:11:52.932 fused_ordering(802) 00:11:52.932 fused_ordering(803) 00:11:52.932 fused_ordering(804) 00:11:52.932 fused_ordering(805) 00:11:52.932 fused_ordering(806) 00:11:52.932 fused_ordering(807) 00:11:52.932 fused_ordering(808) 00:11:52.932 fused_ordering(809) 00:11:52.932 fused_ordering(810) 00:11:52.932 fused_ordering(811) 00:11:52.932 fused_ordering(812) 00:11:52.932 fused_ordering(813) 00:11:52.932 fused_ordering(814) 00:11:52.932 fused_ordering(815) 00:11:52.932 fused_ordering(816) 00:11:52.932 fused_ordering(817) 00:11:52.932 fused_ordering(818) 00:11:52.932 fused_ordering(819) 00:11:52.932 fused_ordering(820) 00:11:53.502 fused_ordering(821) 00:11:53.502 fused_ordering(822) 00:11:53.502 fused_ordering(823) 00:11:53.502 fused_ordering(824) 00:11:53.502 fused_ordering(825) 00:11:53.502 fused_ordering(826) 00:11:53.502 fused_ordering(827) 00:11:53.502 fused_ordering(828) 00:11:53.502 fused_ordering(829) 00:11:53.502 fused_ordering(830) 00:11:53.502 fused_ordering(831) 00:11:53.502 fused_ordering(832) 00:11:53.502 fused_ordering(833) 00:11:53.502 fused_ordering(834) 00:11:53.502 fused_ordering(835) 00:11:53.502 fused_ordering(836) 00:11:53.502 fused_ordering(837) 00:11:53.502 fused_ordering(838) 00:11:53.502 fused_ordering(839) 00:11:53.502 fused_ordering(840) 00:11:53.502 fused_ordering(841) 00:11:53.502 fused_ordering(842) 00:11:53.502 fused_ordering(843) 00:11:53.502 fused_ordering(844) 00:11:53.502 fused_ordering(845) 00:11:53.502 fused_ordering(846) 00:11:53.502 fused_ordering(847) 00:11:53.502 fused_ordering(848) 00:11:53.502 fused_ordering(849) 00:11:53.502 fused_ordering(850) 00:11:53.502 fused_ordering(851) 00:11:53.502 fused_ordering(852) 00:11:53.502 fused_ordering(853) 00:11:53.502 fused_ordering(854) 00:11:53.502 fused_ordering(855) 00:11:53.502 fused_ordering(856) 00:11:53.502 fused_ordering(857) 00:11:53.502 fused_ordering(858) 00:11:53.502 fused_ordering(859) 00:11:53.502 fused_ordering(860) 00:11:53.502 fused_ordering(861) 00:11:53.502 fused_ordering(862) 00:11:53.502 fused_ordering(863) 00:11:53.502 fused_ordering(864) 00:11:53.502 fused_ordering(865) 00:11:53.502 fused_ordering(866) 00:11:53.502 fused_ordering(867) 00:11:53.502 fused_ordering(868) 00:11:53.502 fused_ordering(869) 00:11:53.502 fused_ordering(870) 00:11:53.502 fused_ordering(871) 00:11:53.502 fused_ordering(872) 00:11:53.502 fused_ordering(873) 00:11:53.502 fused_ordering(874) 00:11:53.502 fused_ordering(875) 00:11:53.502 fused_ordering(876) 00:11:53.502 fused_ordering(877) 00:11:53.502 fused_ordering(878) 00:11:53.502 fused_ordering(879) 00:11:53.502 fused_ordering(880) 00:11:53.502 fused_ordering(881) 00:11:53.502 fused_ordering(882) 00:11:53.502 fused_ordering(883) 00:11:53.502 fused_ordering(884) 00:11:53.502 fused_ordering(885) 00:11:53.502 fused_ordering(886) 00:11:53.502 fused_ordering(887) 00:11:53.502 fused_ordering(888) 00:11:53.502 fused_ordering(889) 00:11:53.502 fused_ordering(890) 00:11:53.502 fused_ordering(891) 00:11:53.502 fused_ordering(892) 00:11:53.502 fused_ordering(893) 00:11:53.502 fused_ordering(894) 00:11:53.502 fused_ordering(895) 00:11:53.502 fused_ordering(896) 00:11:53.502 fused_ordering(897) 00:11:53.502 fused_ordering(898) 00:11:53.502 fused_ordering(899) 00:11:53.502 fused_ordering(900) 00:11:53.502 fused_ordering(901) 00:11:53.502 fused_ordering(902) 00:11:53.502 fused_ordering(903) 00:11:53.502 fused_ordering(904) 00:11:53.502 fused_ordering(905) 
00:11:53.502 fused_ordering(906) 00:11:53.502 fused_ordering(907) 00:11:53.502 fused_ordering(908) 00:11:53.502 fused_ordering(909) 00:11:53.502 fused_ordering(910) 00:11:53.502 fused_ordering(911) 00:11:53.502 fused_ordering(912) 00:11:53.502 fused_ordering(913) 00:11:53.502 fused_ordering(914) 00:11:53.502 fused_ordering(915) 00:11:53.502 fused_ordering(916) 00:11:53.502 fused_ordering(917) 00:11:53.502 fused_ordering(918) 00:11:53.502 fused_ordering(919) 00:11:53.502 fused_ordering(920) 00:11:53.502 fused_ordering(921) 00:11:53.502 fused_ordering(922) 00:11:53.502 fused_ordering(923) 00:11:53.502 fused_ordering(924) 00:11:53.502 fused_ordering(925) 00:11:53.502 fused_ordering(926) 00:11:53.502 fused_ordering(927) 00:11:53.502 fused_ordering(928) 00:11:53.502 fused_ordering(929) 00:11:53.502 fused_ordering(930) 00:11:53.502 fused_ordering(931) 00:11:53.502 fused_ordering(932) 00:11:53.502 fused_ordering(933) 00:11:53.502 fused_ordering(934) 00:11:53.502 fused_ordering(935) 00:11:53.502 fused_ordering(936) 00:11:53.502 fused_ordering(937) 00:11:53.502 fused_ordering(938) 00:11:53.502 fused_ordering(939) 00:11:53.502 fused_ordering(940) 00:11:53.502 fused_ordering(941) 00:11:53.502 fused_ordering(942) 00:11:53.502 fused_ordering(943) 00:11:53.502 fused_ordering(944) 00:11:53.502 fused_ordering(945) 00:11:53.502 fused_ordering(946) 00:11:53.502 fused_ordering(947) 00:11:53.502 fused_ordering(948) 00:11:53.502 fused_ordering(949) 00:11:53.502 fused_ordering(950) 00:11:53.502 fused_ordering(951) 00:11:53.502 fused_ordering(952) 00:11:53.502 fused_ordering(953) 00:11:53.502 fused_ordering(954) 00:11:53.502 fused_ordering(955) 00:11:53.502 fused_ordering(956) 00:11:53.502 fused_ordering(957) 00:11:53.502 fused_ordering(958) 00:11:53.502 fused_ordering(959) 00:11:53.502 fused_ordering(960) 00:11:53.502 fused_ordering(961) 00:11:53.502 fused_ordering(962) 00:11:53.502 fused_ordering(963) 00:11:53.502 fused_ordering(964) 00:11:53.502 fused_ordering(965) 00:11:53.502 fused_ordering(966) 00:11:53.502 fused_ordering(967) 00:11:53.502 fused_ordering(968) 00:11:53.502 fused_ordering(969) 00:11:53.502 fused_ordering(970) 00:11:53.502 fused_ordering(971) 00:11:53.502 fused_ordering(972) 00:11:53.502 fused_ordering(973) 00:11:53.502 fused_ordering(974) 00:11:53.502 fused_ordering(975) 00:11:53.502 fused_ordering(976) 00:11:53.502 fused_ordering(977) 00:11:53.502 fused_ordering(978) 00:11:53.502 fused_ordering(979) 00:11:53.502 fused_ordering(980) 00:11:53.502 fused_ordering(981) 00:11:53.502 fused_ordering(982) 00:11:53.502 fused_ordering(983) 00:11:53.502 fused_ordering(984) 00:11:53.502 fused_ordering(985) 00:11:53.502 fused_ordering(986) 00:11:53.502 fused_ordering(987) 00:11:53.502 fused_ordering(988) 00:11:53.502 fused_ordering(989) 00:11:53.502 fused_ordering(990) 00:11:53.502 fused_ordering(991) 00:11:53.502 fused_ordering(992) 00:11:53.502 fused_ordering(993) 00:11:53.502 fused_ordering(994) 00:11:53.502 fused_ordering(995) 00:11:53.502 fused_ordering(996) 00:11:53.502 fused_ordering(997) 00:11:53.502 fused_ordering(998) 00:11:53.502 fused_ordering(999) 00:11:53.502 fused_ordering(1000) 00:11:53.502 fused_ordering(1001) 00:11:53.502 fused_ordering(1002) 00:11:53.502 fused_ordering(1003) 00:11:53.502 fused_ordering(1004) 00:11:53.502 fused_ordering(1005) 00:11:53.502 fused_ordering(1006) 00:11:53.502 fused_ordering(1007) 00:11:53.502 fused_ordering(1008) 00:11:53.502 fused_ordering(1009) 00:11:53.502 fused_ordering(1010) 00:11:53.502 fused_ordering(1011) 00:11:53.502 fused_ordering(1012) 
00:11:53.502 fused_ordering(1013) 00:11:53.502 fused_ordering(1014) 00:11:53.502 fused_ordering(1015) 00:11:53.502 fused_ordering(1016) 00:11:53.502 fused_ordering(1017) 00:11:53.502 fused_ordering(1018) 00:11:53.502 fused_ordering(1019) 00:11:53.502 fused_ordering(1020) 00:11:53.502 fused_ordering(1021) 00:11:53.502 fused_ordering(1022) 00:11:53.502 fused_ordering(1023) 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:53.502 rmmod nvme_tcp 00:11:53.502 rmmod nvme_fabrics 00:11:53.502 rmmod nvme_keyring 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 771250 ']' 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 771250 00:11:53.502 09:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 771250 ']' 00:11:53.503 09:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 771250 00:11:53.503 09:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:53.503 09:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:53.503 09:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 771250 00:11:53.503 09:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:53.503 09:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:53.503 09:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 771250' 00:11:53.503 killing process with pid 771250 00:11:53.503 09:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 771250 00:11:53.503 09:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 771250 00:11:53.762 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:53.762 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:53.762 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:53.762 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:53.762 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:53.762 09:20:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.762 09:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # 
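Teardown mirrors setup: the EXIT trap is cleared, nvmftestfini syncs and unloads nvme-tcp (the rmmod lines show nvme_tcp, nvme_fabrics, and nvme_keyring going with it), and killprocess stops target PID 771250 only after checking its comm name (reactor_1, not sudo). Roughly, and with the namespace removal being an assumption about _remove_spdk_ns internals:

# Sketch of the cleanup path traced above (PID 771250 on this run).
trap - SIGINT SIGTERM EXIT
sync
modprobe -v -r nvme-tcp                      # drags out nvme_fabrics / nvme_keyring too
modprobe -v -r nvme-fabrics
if [[ $(ps --no-headers -o comm= 771250) != sudo ]]; then
    kill 771250 && wait 771250               # nvmf_tgt was a child of this shell
fi
ip netns delete cvl_0_0_ns_spdk              # assumption about _remove_spdk_ns internals;
                                             # physical ports fall back to the default ns
ip -4 addr flush cvl_0_1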
eval '_remove_spdk_ns 14> /dev/null' 00:11:53.762 09:20:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.667 09:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:55.667 00:11:55.667 real 0m7.283s 00:11:55.667 user 0m4.994s 00:11:55.667 sys 0m2.685s 00:11:55.667 09:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.667 09:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:55.667 ************************************ 00:11:55.667 END TEST nvmf_fused_ordering 00:11:55.667 ************************************ 00:11:55.926 09:20:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:55.926 09:20:06 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:55.926 09:20:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:55.926 09:20:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.926 09:20:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:55.926 ************************************ 00:11:55.926 START TEST nvmf_delete_subsystem 00:11:55.926 ************************************ 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:55.926 * Looking for test storage... 00:11:55.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.926 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:55.927 09:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:58.463 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:58.463 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
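The trace above shows nvmf/common.sh matching the two E810 ports (0000:09:00.0 and 0000:09:00.1) against the PCI ID pair 0x8086:0x159b from its pci_bus_cache. A minimal standalone sketch of the same sysfs scan, using illustrative variable names rather than the script's own caching helpers:

    # Sketch: locate Intel E810 (8086:159b) PCI functions and their net devices.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            ls "$pci/net" 2>/dev/null    # kernel netdev name(s), e.g. cvl_0_0
        fi
    done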
00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:58.463 Found net devices under 0000:09:00.0: cvl_0_0 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:58.463 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:58.464 Found net devices under 0000:09:00.1: cvl_0_1 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:58.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:11:58.464 00:11:58.464 --- 10.0.0.2 ping statistics --- 00:11:58.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.464 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:58.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:11:58.464 00:11:58.464 --- 10.0.0.1 ping statistics --- 00:11:58.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.464 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=773588 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 773588 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 773588 ']' 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.464 [2024-07-15 09:20:09.288052] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:11:58.464 [2024-07-15 09:20:09.288136] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.464 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.464 [2024-07-15 09:20:09.349950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:58.464 [2024-07-15 09:20:09.452823] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
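To restate what nvmf_tcp_init just assembled: one NIC port (cvl_0_0) is moved into the private namespace cvl_0_0_ns_spdk and addressed as the target side (10.0.0.2), the other port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), the firewall is opened for the NVMe/TCP port, and reachability is proven in both directions before nvmf_tgt is launched inside the namespace. Condensed from the commands traced above, with the same device names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator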
00:11:58.464 [2024-07-15 09:20:09.452875] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.464 [2024-07-15 09:20:09.452898] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.464 [2024-07-15 09:20:09.452909] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.464 [2024-07-15 09:20:09.452918] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.464 [2024-07-15 09:20:09.453006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.464 [2024-07-15 09:20:09.453011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.464 [2024-07-15 09:20:09.601255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.464 [2024-07-15 09:20:09.617459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.464 NULL1 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.464 Delay0 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=773609 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:58.464 09:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:58.724 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.724 [2024-07-15 09:20:09.692145] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
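The target configuration above is all JSON-RPC, so it can be replayed by hand against a running nvmf_tgt via scripts/rpc.py. The calls below mirror the rpc_cmd traces; per the rpc.py flag names, bdev_null_create takes a size (MB) and block size, and bdev_delay_create's -r/-t/-w/-n set average and p99 read/write latencies in microseconds, the one-second delay being what guarantees I/O is still in flight when the subsystem is deleted:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0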
00:12:00.627 09:20:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:00.627 09:20:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:00.885 [... repeated "Read/Write completed with error (sct=0, sc=8)" completions interleaved with "starting I/O failed: -6" omitted ...]
00:12:00.886 [2024-07-15 09:20:11.982137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f43b400d600 is same with the state(5) to be set
00:12:00.886 [... repeated "Read/Write completed with error (sct=0, sc=8)" completions omitted ...]
00:12:00.886 [2024-07-15 09:20:11.982785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f43b400cfe0 is same with the state(5) to be set
00:12:00.886 [... repeated completions interleaved with "starting I/O failed: -6" omitted ...]
00:12:00.886 [2024-07-15 09:20:11.983404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf467a0 is same with the state(5) to be set
00:12:00.886 [... repeated completions omitted ...]
00:12:00.886 [2024-07-15 09:20:11.983714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f43b4000c00 is same with the state(5) to be set
00:12:01.825 [2024-07-15 09:20:12.949433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47ac0 is same with the state(5) to be set
00:12:01.825 [... repeated completions omitted ...]
00:12:01.825 [2024-07-15 09:20:12.985107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf463e0 is same with the state(5) to be set
00:12:01.825 [... repeated completions omitted ...]
00:12:01.825 [2024-07-15 09:20:12.986181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf465c0 is same with the state(5) to be set
00:12:01.825 [... repeated completions omitted ...]
00:12:01.825 [2024-07-15 09:20:12.986404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46980 is same with the state(5) to be set
00:12:01.825 [... repeated completions omitted ...]
00:12:01.825 [2024-07-15 09:20:12.986532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f43b400d2f0 is same with the state(5) to be set
00:12:01.825 Initializing NVMe Controllers
00:12:01.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:01.825 Controller IO queue size 128, less than required.
00:12:01.825 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:01.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:12:01.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:12:01.825 Initialization complete. Launching workers.
00:12:01.825 ========================================================
00:12:01.825                                                                               Latency(us)
00:12:01.825 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:12:01.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   189.08     0.09  949842.72     677.89 1011743.42
00:12:01.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   151.37     0.07  892264.11     651.95 1043232.37
00:12:01.825 ========================================================
00:12:01.825 Total                                                                    :   340.45     0.17  924242.90     651.95 1043232.37
00:12:01.825 
00:12:01.825 [2024-07-15 09:20:12.987701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf47ac0 (9): Bad file descriptor
00:12:01.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:12:01.825 09:20:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:01.825 09:20:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:12:01.825 09:20:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 773609
00:12:01.825 09:20:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 773609
00:12:02.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (773609) - No such process
00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 773609
00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 773609
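The "(773609) - No such process" line above is the success path, not a failure: delete_subsystem.sh polls the perf process with kill -0 until the subsystem deletion forces it to exit, then uses the NOT wrapper to assert that wait reports a non-zero status. The bounded polling loop, as a standalone sketch (the perf_pid variable is illustrative):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do    # succeeds only while the PID exists
        (( delay++ > 30 )) && { echo 'perf did not exit in time' >&2; exit 1; }
        sleep 0.5
    done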
00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 773609 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.392 [2024-07-15 09:20:13.510911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=774033 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774033 00:12:02.392 09:20:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:02.392 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.392 [2024-07-15 09:20:13.574511] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:12:02.959 09:20:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:02.959 09:20:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774033 00:12:02.959 09:20:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:03.525 09:20:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:03.525 09:20:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774033 00:12:03.525 09:20:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:04.095 09:20:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:04.095 09:20:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774033 00:12:04.095 09:20:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:04.355 09:20:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:04.355 09:20:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774033 00:12:04.355 09:20:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:04.922 09:20:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:04.922 09:20:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774033 00:12:04.922 09:20:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:05.490 09:20:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:05.490 09:20:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774033 00:12:05.490 09:20:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:05.748 Initializing NVMe Controllers 00:12:05.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:05.749 Controller IO queue size 128, less than required. 00:12:05.749 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:05.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:05.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:05.749 Initialization complete. Launching workers. 
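Every knob of this I/O load is visible in the spdk_nvme_perf command line traced above: -c 0xC pins workers to cores 2 and 3 (the "from core 2/3" rows in the result tables), -q 128 sets the queue depth, which against the controller's 128-entry IO queue produces the "less than required" notice, -w randrw -M 70 requests random I/O at a 70/30 read/write mix, -o 512 issues 512-byte I/Os, -P 4 uses four qpairs, and -t gives the run time in seconds. A minimal re-run against the same listener:

    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    $perf -c 0xC -q 128 -w randrw -M 70 -o 512 -P 4 -t 3 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'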
00:12:05.749 ========================================================
00:12:05.749                                                                               Latency(us)
00:12:05.749 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:12:05.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   128.00     0.06 1004466.65 1000168.37 1042636.18
00:12:05.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   128.00     0.06 1004802.11 1000209.03 1040970.66
00:12:05.749 ========================================================
00:12:05.749 Total                                                                    :   256.00     0.12 1004634.38 1000168.37 1042636.18
00:12:05.749 
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 774033
00:12:06.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (774033) - No such process
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 774033
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:06.009 rmmod nvme_tcp
00:12:06.009 rmmod nvme_fabrics
00:12:06.009 rmmod nvme_keyring
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 773588 ']'
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 773588
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 773588 ']'
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 773588
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 773588
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 773588'
killing process with pid 773588
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 773588
00:12:06.009 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 773588
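Two quick sanity checks on the tables above: the MiB/s column is just IOPS x 512 bytes / 2^20, and the roughly 1,004,000 us averages are the 1,000,000 us configured into the Delay0 bdev plus queuing and transport overhead:

    echo 'scale=4; 128 * 512 / 1048576' | bc    # .0625, reported as 0.06
    echo 'scale=4; 256 * 512 / 1048576' | bc    # .1250, reported as 0.12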
00:12:06.268 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:06.268 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:06.268 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:06.268 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.268 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:06.268 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.268 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.268 09:20:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.806 09:20:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:08.806 00:12:08.806 real 0m12.499s 00:12:08.806 user 0m28.161s 00:12:08.806 sys 0m3.051s 00:12:08.806 09:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:08.806 09:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.806 ************************************ 00:12:08.806 END TEST nvmf_delete_subsystem 00:12:08.806 ************************************ 00:12:08.806 09:20:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:08.806 09:20:19 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:08.806 09:20:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:08.806 09:20:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:08.806 09:20:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:08.806 ************************************ 00:12:08.806 START TEST nvmf_ns_masking 00:12:08.806 ************************************ 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:08.806 * Looking for test storage... 
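Note the accounting pattern as one test hands off to the next: run_test brackets each script in START/END banners and a time measurement, and user CPU (28.2s) exceeding real time (12.5s) is expected because the perf workers and SPDK reactors are pollers spinning at 100% across several cores. A sketch of the wrapper's shape; the real helper in autotest_common.sh also tracks exit codes, so this is illustrative only:

    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    run_test_sketch nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp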
00:12:08.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.806 09:20:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=cfe3fa47-c479-487f-9775-773baaf1c96b 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c87a91a2-90d2-4866-83f5-49d7056c908a 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2e7c9011-c0ae-4d97-b09b-b548e2458136 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:08.807 09:20:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:10.714 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.714 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:10.714 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:10.714 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:10.714 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:10.714 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:10.714 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:10.714 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:10.715 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:10.715 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:10.715 
09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:10.715 Found net devices under 0000:09:00.0: cvl_0_0 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:10.715 Found net devices under 0000:09:00.1: cvl_0_1 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:10.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:12:10.715 00:12:10.715 --- 10.0.0.2 ping statistics --- 00:12:10.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.715 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:12:10.715 00:12:10.715 --- 10.0.0.1 ping statistics --- 00:12:10.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.715 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=776483 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 776483 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 776483 ']' 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.715 09:20:21 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.715 09:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:10.715 [2024-07-15 09:20:21.858176] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:12:10.715 [2024-07-15 09:20:21.858279] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.715 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.974 [2024-07-15 09:20:21.920727] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.974 [2024-07-15 09:20:22.020053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.974 [2024-07-15 09:20:22.020108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.974 [2024-07-15 09:20:22.020129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.974 [2024-07-15 09:20:22.020140] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.974 [2024-07-15 09:20:22.020149] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.974 [2024-07-15 09:20:22.020178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.974 09:20:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.974 09:20:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:10.974 09:20:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:10.974 09:20:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:10.974 09:20:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:10.974 09:20:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.974 09:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:11.232 [2024-07-15 09:20:22.381021] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.232 09:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:11.232 09:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:11.232 09:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:11.491 Malloc1 00:12:11.491 09:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:11.750 Malloc2 00:12:11.750 09:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
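Stripped of the xtrace noise, the target bring-up recorded above is only a handful of JSON-RPC calls. A condensed sketch follows; $rpc stands in for the full /var/jenkins/.../spdk/scripts/rpc.py path used by the harness, and the flag glosses in the comments are my reading of rpc.py's options rather than anything stated in the trace:

    # Condensed from the rpc.py calls issued against the target above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport; -u sets the I/O unit size
    $rpc bdev_malloc_create 64 512 -b Malloc1       # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host

The namespaces and the TCP listener on 10.0.0.2:4420 are attached in the lines that follow.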
00:12:12.009 09:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:12.265 09:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.522 [2024-07-15 09:20:23.635400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.522 09:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:12.522 09:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2e7c9011-c0ae-4d97-b09b-b548e2458136 -a 10.0.0.2 -s 4420 -i 4 00:12:12.780 09:20:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.780 09:20:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:12.780 09:20:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.780 09:20:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:12.780 09:20:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:14.681 [ 0]:0x1 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:14.681 09:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:14.938 09:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de417d9f7e234add91f897b1fdb3fa19 00:12:14.938 09:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de417d9f7e234add91f897b1fdb3fa19 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:14.938 09:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
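Every "[ 0]:0x1"-style check from here on is the same small probe: list the active namespace IDs the controller exposes, then read the NGUID back with id-ns and compare it against all zeroes (a namespace the host cannot see identifies as zeroes). A sketch of that check, assuming nvme-cli and jq as used in the trace and the /dev/nvme0 controller the connect above resolved to:

    # A namespace counts as visible when list-ns shows its ID and id-ns
    # returns a non-zero NGUID for it; this mirrors the checks in the trace.
    ns_is_visible() {
      local nsid=$1                                 # e.g. 0x1 or 0x2
      nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
    }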
00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:15.195 [ 0]:0x1 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de417d9f7e234add91f897b1fdb3fa19 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de417d9f7e234add91f897b1fdb3fa19 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:15.195 [ 1]:0x2 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82c010d73f024f7e8892e7b11f5cb6d8 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82c010d73f024f7e8892e7b11f5cb6d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.195 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.451 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:15.709 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:15.709 09:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2e7c9011-c0ae-4d97-b09b-b548e2458136 -a 10.0.0.2 -s 4420 -i 4 00:12:15.966 09:20:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:15.966 09:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:15.966 09:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.966 09:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:15.966 09:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:15.966 09:20:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:17.893 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:17.893 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:17.893 09:20:29 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.893 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:17.893 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.893 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:18.151 [ 0]:0x2 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82c010d73f024f7e8892e7b11f5cb6d8 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
82c010d73f024f7e8892e7b11f5cb6d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:18.151 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:18.409 [ 0]:0x1 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de417d9f7e234add91f897b1fdb3fa19 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de417d9f7e234add91f897b1fdb3fa19 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:18.409 [ 1]:0x2 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82c010d73f024f7e8892e7b11f5cb6d8 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82c010d73f024f7e8892e7b11f5cb6d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:18.409 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:18.666 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:18.666 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:18.666 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:18.666 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:18.666 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.666 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:18.667 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.667 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:18.667 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:18.667 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:18.667 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:18.667 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:18.924 [ 0]:0x2 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82c010d73f024f7e8892e7b11f5cb6d8 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82c010d73f024f7e8892e7b11f5cb6d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.924 09:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:19.182 09:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:19.182 09:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2e7c9011-c0ae-4d97-b09b-b548e2458136 -a 10.0.0.2 -s 4420 -i 4 00:12:19.182 09:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:19.182 09:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:19.182 09:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.182 09:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:19.182 09:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:19.182 09:20:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
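What flips these probes between pass and fail is per-host namespace masking: Malloc1 was re-added with --no-auto-visible, so it only appears to hosts that have been explicitly attached to it, while namespace 2 (added without the flag) stays visible throughout. Condensed from the rpc.py calls in the trace, with $rpc again standing in for the full scripts/rpc.py path:

    # A namespace added with --no-auto-visible stays hidden until a host
    # NQN is granted access; removing the host hides it again.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # nsid 1 visible to host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden from host1 again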
00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:21.715 [ 0]:0x1 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de417d9f7e234add91f897b1fdb3fa19 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de417d9f7e234add91f897b1fdb3fa19 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:21.715 [ 1]:0x2 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82c010d73f024f7e8892e7b11f5cb6d8 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82c010d73f024f7e8892e7b11f5cb6d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:21.715 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:21.716 [ 0]:0x2 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82c010d73f024f7e8892e7b11f5cb6d8 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82c010d73f024f7e8892e7b11f5cb6d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:21.716 09:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:21.974 [2024-07-15 09:20:33.092699] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:21.974 request: 00:12:21.974 { 00:12:21.974 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:21.974 "nsid": 2, 00:12:21.974 "host": "nqn.2016-06.io.spdk:host1", 00:12:21.974 "method": "nvmf_ns_remove_host", 00:12:21.974 "req_id": 1 00:12:21.974 } 00:12:21.974 Got JSON-RPC error response 00:12:21.974 response: 00:12:21.974 { 00:12:21.974 "code": -32602, 00:12:21.974 "message": "Invalid parameters" 00:12:21.974 } 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:21.974 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:21.974 [ 0]:0x2 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82c010d73f024f7e8892e7b11f5cb6d8 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
82c010d73f024f7e8892e7b11f5cb6d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=777999 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 777999 /var/tmp/host.sock 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 777999 ']' 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:22.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.231 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:22.231 [2024-07-15 09:20:33.284295] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:12:22.231 [2024-07-15 09:20:33.284389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777999 ]
00:12:22.231 EAL: No free 2048 kB hugepages reported on node 1
00:12:22.231 [2024-07-15 09:20:33.342038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:22.490 [2024-07-15 09:20:33.448353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:22.749 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:22.749 09:20:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0
00:12:22.749 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:23.006 09:20:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:23.265 09:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid cfe3fa47-c479-487f-9775-773baaf1c96b
00:12:23.265 09:20:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d -
00:12:23.265 09:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CFE3FA47C479487F9775773BAAF1C96B -i
00:12:23.523 09:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c87a91a2-90d2-4866-83f5-49d7056c908a
00:12:23.523 09:20:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d -
00:12:23.523 09:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C87A91A290D2486683F549D7056C908A -i
00:12:23.523 09:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:12:23.781 09:20:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:12:24.039 09:20:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:12:24.039 09:20:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:12:24.606 nvme0n1
00:12:24.606 09:20:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:12:24.606 09:20:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:12:24.863 nvme1n2
00:12:24.863 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:12:24.863 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:12:24.863 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:12:24.863 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:12:24.863 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:12:25.121 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:12:25.121 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:12:25.121 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:12:25.121 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:12:25.380 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ cfe3fa47-c479-487f-9775-773baaf1c96b == \c\f\e\3\f\a\4\7\-\c\4\7\9\-\4\8\7\f\-\9\7\7\5\-\7\7\3\b\a\a\f\1\c\9\6\b ]]
00:12:25.380 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:12:25.380 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:12:25.380 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:12:25.638 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c87a91a2-90d2-4866-83f5-49d7056c908a == \c\8\7\a\9\1\a\2\-\9\0\d\2\-\4\8\6\6\-\8\3\f\5\-\4\9\d\7\0\5\6\c\9\0\8\a ]]
00:12:25.638 09:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 777999
00:12:25.638 09:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 777999 ']'
00:12:25.638 09:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 777999
00:12:25.638 09:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname
00:12:25.638 09:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:25.638 09:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 777999
00:12:25.897 09:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:12:25.897 09:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:12:25.897 09:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 777999'
00:12:25.897 killing process with pid 777999
00:12:25.897 09:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 777999
00:12:25.897 09:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 777999
00:12:26.156 09:20:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:26.414 09:20:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT
00:12:26.414 09:20:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini
00:12:26.414 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:26.414 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync
00:12:26.414 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:26.414 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e
00:12:26.414 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:26.414 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:26.414 rmmod nvme_tcp
00:12:26.414 rmmod nvme_fabrics
00:12:26.414 rmmod nvme_keyring
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 776483 ']'
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 776483
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 776483 ']'
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 776483
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 776483
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 776483'
00:12:26.674 killing process with pid 776483
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 776483
00:12:26.674 09:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 776483
00:12:26.934 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:26.934 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:26.934 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:26.934 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:26.934 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:26.934 09:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:26.934 09:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:26.934 09:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:28.842 09:20:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:28.842
00:12:28.842 real 0m20.547s
00:12:28.842 user 0m26.609s
00:12:28.842 sys 0m4.054s
00:12:28.842 09:20:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:28.842 09:20:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:28.842 ************************************
00:12:28.842 END TEST nvmf_ns_masking
00:12:28.842 ************************************
00:12:28.842 09:20:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:12:28.842 09:20:40 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]]
00:12:28.842 09:20:40 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:12:28.842 09:20:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:28.842 09:20:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:28.842 09:20:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:29.101 ************************************
00:12:29.101 START TEST nvmf_nvme_cli
00:12:29.101 ************************************
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:12:29.101 * Looking for test storage...
00:12:29.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=()
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable
00:12:29.101 09:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=()
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=()
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=()
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=()
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=()
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=()
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=()
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:31.006 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:12:31.007 Found 0000:09:00.0 (0x8086 - 0x159b)
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:12:31.007 Found 0000:09:00.1 (0x8086 - 0x159b)
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:12:31.007 Found net devices under 0000:09:00.0: cvl_0_0
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:12:31.007 Found net devices under 0000:09:00.1: cvl_0_1
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:31.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:31.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms
00:12:31.007
00:12:31.007 --- 10.0.0.2 ping statistics ---
00:12:31.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:31.007 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:31.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:31.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms
00:12:31.007
00:12:31.007 --- 10.0.0.1 ping statistics ---
00:12:31.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:31.007 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=780492
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 780492
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 780492 ']'
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:31.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:31.007 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.267 [2024-07-15 09:20:42.236356] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
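Condensed from the nvmf/common.sh trace above, this is the network split the phy tests rely on: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace to host the target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator, and the two pings prove both directions work before any NVMe/TCP traffic starts. A sketch of just those steps, with device names and addresses taken from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

The target binary itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why the log prefixes the app start with the netns command.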
00:12:31.267 [2024-07-15 09:20:42.236461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:31.267 EAL: No free 2048 kB hugepages reported on node 1
00:12:31.267 [2024-07-15 09:20:42.298445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:31.267 [2024-07-15 09:20:42.401911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:31.267 [2024-07-15 09:20:42.401964] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:31.267 [2024-07-15 09:20:42.401985] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:31.267 [2024-07-15 09:20:42.401996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:31.267 [2024-07-15 09:20:42.402006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:31.267 [2024-07-15 09:20:42.402086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:31.268 [2024-07-15 09:20:42.402145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:12:31.268 [2024-07-15 09:20:42.402205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:12:31.268 [2024-07-15 09:20:42.402208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.526 [2024-07-15 09:20:42.555678] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.526 Malloc0
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.526 Malloc1
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:31.526 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.527 [2024-07-15 09:20:42.641491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:31.527 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420
00:12:31.785
00:12:31.785 Discovery Log Number of Records 2, Generation counter 2
00:12:31.785 =====Discovery Log Entry 0======
00:12:31.785 trtype: tcp
00:12:31.785 adrfam: ipv4
00:12:31.785 subtype: current discovery subsystem
00:12:31.785 treq: not required
00:12:31.785 portid: 0
00:12:31.785 trsvcid: 4420
00:12:31.785 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:31.785 traddr: 10.0.0.2
00:12:31.785 eflags: explicit discovery connections, duplicate discovery information
00:12:31.785 sectype: none
00:12:31.785 =====Discovery Log Entry 1======
00:12:31.785 trtype: tcp
00:12:31.785 adrfam: ipv4
00:12:31.785 subtype: nvme subsystem
00:12:31.785 treq: not required
00:12:31.785 portid: 0
00:12:31.785 trsvcid: 4420
00:12:31.785 subnqn: nqn.2016-06.io.spdk:cnode1
00:12:31.785 traddr: 10.0.0.2
00:12:31.785 eflags: none
00:12:31.785 sectype: none
00:12:31.785 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:12:31.785 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:12:31.785 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:12:31.785 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:12:31.785 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:12:31.785 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:12:31.785 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:12:31.785 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:12:31.785 09:20:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:12:31.785 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:12:31.785 09:20:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:32.395 09:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:12:32.395 09:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0
00:12:32.395 09:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:32.395 09:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]]
00:12:32.395 09:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2
00:12:32.395 09:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2
00:12:34.349 09:20:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:34.349 09:20:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:34.349 09:20:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:34.349 09:20:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2
00:12:34.349 09:20:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:34.349 09:20:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0
00:12:34.349 09:20:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:12:34.349 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:12:34.349 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:12:34.349 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2
/dev/nvme0n1 ]]
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs))
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2
00:12:34.607 09:20:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:34.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:34.866 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:34.866 rmmod nvme_tcp
00:12:34.866 rmmod nvme_fabrics
00:12:35.124 rmmod nvme_keyring
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 780492 ']'
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 780492
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 780492 ']'
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 780492
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 780492
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 780492'
00:12:35.124 killing process with pid 780492
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 780492
00:12:35.124 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 780492
00:12:35.382 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:35.382 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:35.382 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:35.382 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:35.382 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:35.382 09:20:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:35.382 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:35.382 09:20:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:37.285 09:20:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:37.285
00:12:37.285 real 0m8.425s
00:12:37.285 user 0m16.317s
00:12:37.285 sys 0m2.177s
00:12:37.285 09:20:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:37.544 09:20:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:37.544 ************************************
00:12:37.544 END TEST nvmf_nvme_cli
00:12:37.544 ************************************
00:12:37.544 09:20:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:12:37.544 09:20:48 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]]
00:12:37.544 09:20:48 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:12:37.544 09:20:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:37.544 09:20:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:37.544 09:20:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:37.544 ************************************
00:12:37.544 START TEST nvmf_vfio_user
00:12:37.544 ************************************
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:12:37.544 * Looking for test storage...
00:12:37.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=781313
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 781313'
00:12:37.544 Process pid: 781313
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 781313
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 781313 ']'
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:37.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:37.544 09:20:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:12:37.544 [2024-07-15 09:20:48.634430] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:12:37.544 [2024-07-15 09:20:48.634525] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:37.544 EAL: No free 2048 kB hugepages reported on node 1
00:12:37.544 [2024-07-15 09:20:48.693397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:37.803 [2024-07-15 09:20:48.806875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:37.803 [2024-07-15 09:20:48.806926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:37.803 [2024-07-15 09:20:48.806939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:37.803 [2024-07-15 09:20:48.806951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:37.803 [2024-07-15 09:20:48.806961] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
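Before the log moves into the vfio-user suite, it is worth distilling the host-side flow the nvme_cli test just exercised; the commands below are lifted from the trace above (the NQNs, hostnqn/hostid, serial and address are the values from this run), with the expected outcomes as comments:

    # discover the two log entries exported by the target
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
    # connect; Malloc0/Malloc1 surface as two namespaces of one controller
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # expect "disconnected 1 controller(s)"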
00:12:37.803 [2024-07-15 09:20:48.807026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:37.803 [2024-07-15 09:20:48.807124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:12:37.803 [2024-07-15 09:20:48.807226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:12:37.803 [2024-07-15 09:20:48.807229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:37.803 09:20:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:37.803 09:20:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0
00:12:37.803 09:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:12:38.740 09:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:12:38.999 09:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:12:38.999 09:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:12:39.259 09:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:12:39.259 09:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:12:39.259 09:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:12:39.259 Malloc1
00:12:39.518 09:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:12:39.518 09:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:12:39.777 09:20:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:12:40.035 09:20:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:12:40.035 09:20:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:12:40.035 09:20:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:12:40.293 Malloc2
00:12:40.293 09:20:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:12:40.551 09:20:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:12:40.809 09:20:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:12:41.067 09:20:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user
00:12:41.067 09:20:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2
00:12:41.067 09:20:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:12:41.067 09:20:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1
00:12:41.067 09:20:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1
00:12:41.067 09:20:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
00:12:41.067 [2024-07-15 09:20:52.250554] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:12:41.067 [2024-07-15 09:20:52.250597] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781845 ]
00:12:41.067 EAL: No free 2048 kB hugepages reported on node 1
00:12:41.327 [2024-07-15 09:20:52.285995] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1
00:12:41.327 [2024-07-15 09:20:52.295467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:12:41.327 [2024-07-15 09:20:52.295495] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f540f1e8000
00:12:41.327 [2024-07-15 09:20:52.296466] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:41.327 [2024-07-15 09:20:52.297462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:41.327 [2024-07-15 09:20:52.298464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:41.327 [2024-07-15 09:20:52.299470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:41.327 [2024-07-15 09:20:52.300471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:41.327 [2024-07-15 09:20:52.301476] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:41.327 [2024-07-15 09:20:52.302484] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:41.327 [2024-07-15 09:20:52.303487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:41.327 [2024-07-15 09:20:52.304496] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:12:41.327 [2024-07-15 09:20:52.304516] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f540f1dd000
00:12:41.327 [2024-07-15 09:20:52.305634] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:12:41.327 [2024-07-15 09:20:52.319570] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully
00:12:41.327 [2024-07-15 09:20:52.319608] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout)
00:12:41.327 [2024-07-15 09:20:52.328616] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:12:41.327 [2024-07-15 09:20:52.328673] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:12:41.327 [2024-07-15 09:20:52.328757] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout)
00:12:41.327 [2024-07-15 09:20:52.328798] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout)
00:12:41.327 [2024-07-15 09:20:52.328819] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout)
00:12:41.327 [2024-07-15 09:20:52.329612] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300
00:12:41.327 [2024-07-15 09:20:52.329630] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout)
00:12:41.327 [2024-07-15 09:20:52.329642] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout)
00:12:41.327 [2024-07-15 09:20:52.330615] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:12:41.327 [2024-07-15 09:20:52.330633] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout)
00:12:41.327 [2024-07-15 09:20:52.330646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms)
00:12:41.327 [2024-07-15 09:20:52.331618] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0
00:12:41.327 [2024-07-15 09:20:52.331636] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:12:41.328 [2024-07-15 09:20:52.332624] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0
00:12:41.328 [2024-07-15 09:20:52.332643] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0
00:12:41.328 [2024-07-15 09:20:52.332652] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms)
00:12:41.328 [2024-07-15 09:20:52.332664] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:12:41.328 [2024-07-15 09:20:52.332773] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1
00:12:41.328 [2024-07-15 09:20:52.332795]
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:41.328 [2024-07-15 09:20:52.332813] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:41.328 [2024-07-15 09:20:52.333632] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:41.328 [2024-07-15 09:20:52.334633] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:41.328 [2024-07-15 09:20:52.335636] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:41.328 [2024-07-15 09:20:52.336634] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.328 [2024-07-15 09:20:52.336735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:41.328 [2024-07-15 09:20:52.337650] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:41.328 [2024-07-15 09:20:52.337669] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:41.328 [2024-07-15 09:20:52.337678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.337701] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:41.328 [2024-07-15 09:20:52.337718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.337741] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.328 [2024-07-15 09:20:52.337751] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.328 [2024-07-15 09:20:52.337769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.328 [2024-07-15 09:20:52.337859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:41.328 [2024-07-15 09:20:52.337877] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:41.328 [2024-07-15 09:20:52.337889] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:41.328 [2024-07-15 09:20:52.337898] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:41.328 [2024-07-15 09:20:52.337906] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:41.328 [2024-07-15 09:20:52.337914] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
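The enable handshake above is easier to read with the raw register values decoded: writing CC (offset 0x14) = 0x460001 sets EN=1 with IOSQES=6 (64-byte SQ entries) and IOCQES=4 (16-byte CQ entries), and the controller is up once CSTS (offset 0x1c) reads back 0x1, i.e. RDY=1. A hypothetical decode helper, not part of the test suite; the field layouts are the NVMe spec's:

    decode_nvme_regs() {
        # CC: bit 0 = EN, bits 16-19 = IOSQES, bits 20-23 = IOCQES
        # CSTS: bit 0 = RDY, bit 1 = CFS, bits 2-3 = SHST
        local cc=$1 csts=$2
        printf 'CC.EN=%d CC.IOSQES=%d CC.IOCQES=%d\n' \
            $(( cc & 1 )) $(( (cc >> 16) & 0xf )) $(( (cc >> 20) & 0xf ))
        printf 'CSTS.RDY=%d CSTS.CFS=%d CSTS.SHST=%d\n' \
            $(( csts & 1 )) $(( (csts >> 1) & 1 )) $(( (csts >> 2) & 0x3 ))
    }
    decode_nvme_regs 0x460001 0x1   # the values logged once the controller is ready

The same decode explains the 0x464001/0x9 pair at shutdown later in this run: CC.SHN is set, then CSTS.SHST reads 2 (shutdown complete).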
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:41.328 [2024-07-15 09:20:52.337922] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:41.328 [2024-07-15 09:20:52.337929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.337942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.337957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:41.328 [2024-07-15 09:20:52.337972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:41.328 [2024-07-15 09:20:52.337992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.328 [2024-07-15 09:20:52.338006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.328 [2024-07-15 09:20:52.338019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.328 [2024-07-15 09:20:52.338031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.328 [2024-07-15 09:20:52.338039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338057] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338072] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:41.328 [2024-07-15 09:20:52.338113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:41.328 [2024-07-15 09:20:52.338124] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:41.328 [2024-07-15 09:20:52.338132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:41.328 [2024-07-15 09:20:52.338176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:41.328 [2024-07-15 09:20:52.338237] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338265] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:41.328 [2024-07-15 09:20:52.338273] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:41.328 [2024-07-15 09:20:52.338282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:41.328 [2024-07-15 09:20:52.338296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:41.328 [2024-07-15 09:20:52.338312] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:41.328 [2024-07-15 09:20:52.338327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338353] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.328 [2024-07-15 09:20:52.338362] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.328 [2024-07-15 09:20:52.338371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.328 [2024-07-15 09:20:52.338393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:41.328 [2024-07-15 09:20:52.338414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338440] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.328 [2024-07-15 09:20:52.338449] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.328 [2024-07-15 09:20:52.338458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.328 [2024-07-15 09:20:52.338472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:41.328 [2024-07-15 09:20:52.338486] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338498] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
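The IDENTIFY commands traced above differ only in the CNS field of cdw10: 01h fetched the controller data, 02h the active namespace ID list, 00h the namespace data for nsid 1, and 03h its namespace ID descriptors. A hypothetical lookup helper for reading these traces (CNS meanings per the NVMe spec, not from the test suite):

    cns_name() {
        # Map the low byte of an IDENTIFY command's cdw10 to its CNS meaning
        case $(( $1 & 0xff )) in
            0) echo 'Identify Namespace' ;;
            1) echo 'Identify Controller' ;;
            2) echo 'Active Namespace ID list' ;;
            3) echo 'Namespace ID Descriptor list' ;;
            *) echo "other CNS value: $1" ;;
        esac
    }
    cns_name 0x00000003   # -> Namespace ID Descriptor list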
00:12:41.328 [2024-07-15 09:20:52.338511] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338529] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338537] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338545] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:41.328 [2024-07-15 09:20:52.338553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:41.328 [2024-07-15 09:20:52.338561] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:41.328 [2024-07-15 09:20:52.338585] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:41.328 [2024-07-15 09:20:52.338603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:41.328 [2024-07-15 09:20:52.338622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:41.328 [2024-07-15 09:20:52.338634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:41.328 [2024-07-15 09:20:52.338650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:41.328 [2024-07-15 09:20:52.338661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:41.328 [2024-07-15 09:20:52.338677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:41.328 [2024-07-15 09:20:52.338688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:41.328 [2024-07-15 09:20:52.338710] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:41.328 [2024-07-15 09:20:52.338720] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:41.328 [2024-07-15 09:20:52.338726] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:41.328 [2024-07-15 09:20:52.338732] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:41.328 [2024-07-15 09:20:52.338742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:41.328 [2024-07-15 09:20:52.338753] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:41.328 
[2024-07-15 09:20:52.338761] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:41.329 [2024-07-15 09:20:52.338770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:41.329 [2024-07-15 09:20:52.338807] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:41.329 [2024-07-15 09:20:52.338818] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.329 [2024-07-15 09:20:52.338828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.329 [2024-07-15 09:20:52.338862] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:41.329 [2024-07-15 09:20:52.338872] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:41.329 [2024-07-15 09:20:52.338881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:41.329 [2024-07-15 09:20:52.338894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:41.329 [2024-07-15 09:20:52.338915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:41.329 [2024-07-15 09:20:52.338933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:41.329 [2024-07-15 09:20:52.338946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:41.329 ===================================================== 00:12:41.329 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:41.329 ===================================================== 00:12:41.329 Controller Capabilities/Features 00:12:41.329 ================================ 00:12:41.329 Vendor ID: 4e58 00:12:41.329 Subsystem Vendor ID: 4e58 00:12:41.329 Serial Number: SPDK1 00:12:41.329 Model Number: SPDK bdev Controller 00:12:41.329 Firmware Version: 24.09 00:12:41.329 Recommended Arb Burst: 6 00:12:41.329 IEEE OUI Identifier: 8d 6b 50 00:12:41.329 Multi-path I/O 00:12:41.329 May have multiple subsystem ports: Yes 00:12:41.329 May have multiple controllers: Yes 00:12:41.329 Associated with SR-IOV VF: No 00:12:41.329 Max Data Transfer Size: 131072 00:12:41.329 Max Number of Namespaces: 32 00:12:41.329 Max Number of I/O Queues: 127 00:12:41.329 NVMe Specification Version (VS): 1.3 00:12:41.329 NVMe Specification Version (Identify): 1.3 00:12:41.329 Maximum Queue Entries: 256 00:12:41.329 Contiguous Queues Required: Yes 00:12:41.329 Arbitration Mechanisms Supported 00:12:41.329 Weighted Round Robin: Not Supported 00:12:41.329 Vendor Specific: Not Supported 00:12:41.329 Reset Timeout: 15000 ms 00:12:41.329 Doorbell Stride: 4 bytes 00:12:41.329 NVM Subsystem Reset: Not Supported 00:12:41.329 Command Sets Supported 00:12:41.329 NVM Command Set: Supported 00:12:41.329 Boot Partition: Not Supported 00:12:41.329 Memory Page Size Minimum: 4096 bytes 00:12:41.329 Memory Page Size Maximum: 4096 bytes 00:12:41.329 Persistent Memory Region: Not Supported 
00:12:41.329 Optional Asynchronous Events Supported 00:12:41.329 Namespace Attribute Notices: Supported 00:12:41.329 Firmware Activation Notices: Not Supported 00:12:41.329 ANA Change Notices: Not Supported 00:12:41.329 PLE Aggregate Log Change Notices: Not Supported 00:12:41.329 LBA Status Info Alert Notices: Not Supported 00:12:41.329 EGE Aggregate Log Change Notices: Not Supported 00:12:41.329 Normal NVM Subsystem Shutdown event: Not Supported 00:12:41.329 Zone Descriptor Change Notices: Not Supported 00:12:41.329 Discovery Log Change Notices: Not Supported 00:12:41.329 Controller Attributes 00:12:41.329 128-bit Host Identifier: Supported 00:12:41.329 Non-Operational Permissive Mode: Not Supported 00:12:41.329 NVM Sets: Not Supported 00:12:41.329 Read Recovery Levels: Not Supported 00:12:41.329 Endurance Groups: Not Supported 00:12:41.329 Predictable Latency Mode: Not Supported 00:12:41.329 Traffic Based Keep Alive: Not Supported 00:12:41.329 Namespace Granularity: Not Supported 00:12:41.329 SQ Associations: Not Supported 00:12:41.329 UUID List: Not Supported 00:12:41.329 Multi-Domain Subsystem: Not Supported 00:12:41.329 Fixed Capacity Management: Not Supported 00:12:41.329 Variable Capacity Management: Not Supported 00:12:41.329 Delete Endurance Group: Not Supported 00:12:41.329 Delete NVM Set: Not Supported 00:12:41.329 Extended LBA Formats Supported: Not Supported 00:12:41.329 Flexible Data Placement Supported: Not Supported 00:12:41.329 00:12:41.329 Controller Memory Buffer Support 00:12:41.329 ================================ 00:12:41.329 Supported: No 00:12:41.329 00:12:41.329 Persistent Memory Region Support 00:12:41.329 ================================ 00:12:41.329 Supported: No 00:12:41.329 00:12:41.329 Admin Command Set Attributes 00:12:41.329 ============================ 00:12:41.329 Security Send/Receive: Not Supported 00:12:41.329 Format NVM: Not Supported 00:12:41.329 Firmware Activate/Download: Not Supported 00:12:41.329 Namespace Management: Not Supported 00:12:41.329 Device Self-Test: Not Supported 00:12:41.329 Directives: Not Supported 00:12:41.329 NVMe-MI: Not Supported 00:12:41.329 Virtualization Management: Not Supported 00:12:41.329 Doorbell Buffer Config: Not Supported 00:12:41.329 Get LBA Status Capability: Not Supported 00:12:41.329 Command & Feature Lockdown Capability: Not Supported 00:12:41.329 Abort Command Limit: 4 00:12:41.329 Async Event Request Limit: 4 00:12:41.329 Number of Firmware Slots: N/A 00:12:41.329 Firmware Slot 1 Read-Only: N/A 00:12:41.329 Firmware Activation Without Reset: N/A 00:12:41.329 Multiple Update Detection Support: N/A 00:12:41.329 Firmware Update Granularity: No Information Provided 00:12:41.329 Per-Namespace SMART Log: No 00:12:41.329 Asymmetric Namespace Access Log Page: Not Supported 00:12:41.329 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:41.329 Command Effects Log Page: Supported 00:12:41.329 Get Log Page Extended Data: Supported 00:12:41.329 Telemetry Log Pages: Not Supported 00:12:41.329 Persistent Event Log Pages: Not Supported 00:12:41.329 Supported Log Pages Log Page: May Support 00:12:41.329 Commands Supported & Effects Log Page: Not Supported 00:12:41.329 Feature Identifiers & Effects Log Page: May Support 00:12:41.329 NVMe-MI Commands & Effects Log Page: May Support 00:12:41.329 Data Area 4 for Telemetry Log: Not Supported 00:12:41.329 Error Log Page Entries Supported: 128 00:12:41.329 Keep Alive: Supported 00:12:41.329 Keep Alive Granularity: 10000 ms 00:12:41.329 00:12:41.329 NVM Command Set Attributes
00:12:41.329 ========================== 00:12:41.329 Submission Queue Entry Size 00:12:41.329 Max: 64 00:12:41.329 Min: 64 00:12:41.329 Completion Queue Entry Size 00:12:41.329 Max: 16 00:12:41.329 Min: 16 00:12:41.329 Number of Namespaces: 32 00:12:41.329 Compare Command: Supported 00:12:41.329 Write Uncorrectable Command: Not Supported 00:12:41.329 Dataset Management Command: Supported 00:12:41.329 Write Zeroes Command: Supported 00:12:41.329 Set Features Save Field: Not Supported 00:12:41.329 Reservations: Not Supported 00:12:41.329 Timestamp: Not Supported 00:12:41.329 Copy: Supported 00:12:41.329 Volatile Write Cache: Present 00:12:41.329 Atomic Write Unit (Normal): 1 00:12:41.329 Atomic Write Unit (PFail): 1 00:12:41.329 Atomic Compare & Write Unit: 1 00:12:41.329 Fused Compare & Write: Supported 00:12:41.329 Scatter-Gather List 00:12:41.329 SGL Command Set: Supported (Dword aligned) 00:12:41.329 SGL Keyed: Not Supported 00:12:41.329 SGL Bit Bucket Descriptor: Not Supported 00:12:41.329 SGL Metadata Pointer: Not Supported 00:12:41.329 Oversized SGL: Not Supported 00:12:41.329 SGL Metadata Address: Not Supported 00:12:41.329 SGL Offset: Not Supported 00:12:41.329 Transport SGL Data Block: Not Supported 00:12:41.329 Replay Protected Memory Block: Not Supported 00:12:41.329 00:12:41.329 Firmware Slot Information 00:12:41.329 ========================= 00:12:41.329 Active slot: 1 00:12:41.329 Slot 1 Firmware Revision: 24.09 00:12:41.329 00:12:41.329 00:12:41.329 Commands Supported and Effects 00:12:41.329 ============================== 00:12:41.329 Admin Commands 00:12:41.329 -------------- 00:12:41.329 Get Log Page (02h): Supported 00:12:41.329 Identify (06h): Supported 00:12:41.329 Abort (08h): Supported 00:12:41.329 Set Features (09h): Supported 00:12:41.329 Get Features (0Ah): Supported 00:12:41.329 Asynchronous Event Request (0Ch): Supported 00:12:41.329 Keep Alive (18h): Supported 00:12:41.329 I/O Commands 00:12:41.329 ------------ 00:12:41.329 Flush (00h): Supported LBA-Change 00:12:41.329 Write (01h): Supported LBA-Change 00:12:41.329 Read (02h): Supported 00:12:41.329 Compare (05h): Supported 00:12:41.329 Write Zeroes (08h): Supported LBA-Change 00:12:41.329 Dataset Management (09h): Supported LBA-Change 00:12:41.329 Copy (19h): Supported LBA-Change 00:12:41.329 00:12:41.329 Error Log 00:12:41.329 ========= 00:12:41.329 00:12:41.329 Arbitration 00:12:41.329 =========== 00:12:41.329 Arbitration Burst: 1 00:12:41.329 00:12:41.329 Power Management 00:12:41.329 ================ 00:12:41.329 Number of Power States: 1 00:12:41.329 Current Power State: Power State #0 00:12:41.329 Power State #0: 00:12:41.329 Max Power: 0.00 W 00:12:41.329 Non-Operational State: Operational 00:12:41.329 Entry Latency: Not Reported 00:12:41.329 Exit Latency: Not Reported 00:12:41.329 Relative Read Throughput: 0 00:12:41.330 Relative Read Latency: 0 00:12:41.330 Relative Write Throughput: 0 00:12:41.330 Relative Write Latency: 0 00:12:41.330 Idle Power: Not Reported 00:12:41.330 Active Power: Not Reported 00:12:41.330 Non-Operational Permissive Mode: Not Supported 00:12:41.330 00:12:41.330 Health Information 00:12:41.330 ================== 00:12:41.330 Critical Warnings: 00:12:41.330 Available Spare Space: OK 00:12:41.330 Temperature: OK 00:12:41.330 Device Reliability: OK 00:12:41.330 Read Only: No 00:12:41.330 Volatile Memory Backup: OK 00:12:41.330 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:41.330 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:41.330 Available Spare: 0% 00:12:41.330 
[2024-07-15 09:20:52.339072] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:41.330 [2024-07-15 09:20:52.339110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:41.330 [2024-07-15 09:20:52.339155] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:41.330 [2024-07-15 09:20:52.339189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.330 [2024-07-15 09:20:52.339200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.330 [2024-07-15 09:20:52.339210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.330 [2024-07-15 09:20:52.339220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.330 [2024-07-15 09:20:52.339659] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:41.330 [2024-07-15 09:20:52.339678] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:41.330 [2024-07-15 09:20:52.340658] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.330 [2024-07-15 09:20:52.340734] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:41.330 [2024-07-15 09:20:52.340748] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:41.330 [2024-07-15 09:20:52.341668] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:41.330 [2024-07-15 09:20:52.341691] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:41.330 [2024-07-15 09:20:52.341743] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:41.330 [2024-07-15 09:20:52.343709] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:41.330 Available Spare Threshold: 0% 00:12:41.330 Life Percentage Used: 0% 00:12:41.330 Data Units Read: 0 00:12:41.330 Data Units Written: 0 00:12:41.330 Host Read Commands: 0 00:12:41.330 Host Write Commands: 0 00:12:41.330 Controller Busy Time: 0 minutes 00:12:41.330 Power Cycles: 0 00:12:41.330 Power On Hours: 0 hours 00:12:41.330 Unsafe Shutdowns: 0 00:12:41.330 Unrecoverable Media Errors: 0 00:12:41.330 Lifetime Error Log Entries: 0 00:12:41.330 Warning Temperature Time: 0 minutes 00:12:41.330 Critical Temperature Time: 0 minutes 00:12:41.330 00:12:41.330 Number of Queues 00:12:41.330 ================ 00:12:41.330 Number of I/O Submission Queues: 127 00:12:41.330 Number of I/O Completion Queues: 127 00:12:41.330 00:12:41.330 Active Namespaces 00:12:41.330 ================= 00:12:41.330 Namespace ID:1 00:12:41.330 Error Recovery Timeout: Unlimited 00:12:41.330 Command
Set Identifier: NVM (00h) 00:12:41.330 Deallocate: Supported 00:12:41.330 Deallocated/Unwritten Error: Not Supported 00:12:41.330 Deallocated Read Value: Unknown 00:12:41.330 Deallocate in Write Zeroes: Not Supported 00:12:41.330 Deallocated Guard Field: 0xFFFF 00:12:41.330 Flush: Supported 00:12:41.330 Reservation: Supported 00:12:41.330 Namespace Sharing Capabilities: Multiple Controllers 00:12:41.330 Size (in LBAs): 131072 (0GiB) 00:12:41.330 Capacity (in LBAs): 131072 (0GiB) 00:12:41.330 Utilization (in LBAs): 131072 (0GiB) 00:12:41.330 NGUID: 87F10B2B24044DB980C0FB200322FC78 00:12:41.330 UUID: 87f10b2b-2404-4db9-80c0-fb200322fc78 00:12:41.330 Thin Provisioning: Not Supported 00:12:41.330 Per-NS Atomic Units: Yes 00:12:41.330 Atomic Boundary Size (Normal): 0 00:12:41.330 Atomic Boundary Size (PFail): 0 00:12:41.330 Atomic Boundary Offset: 0 00:12:41.330 Maximum Single Source Range Length: 65535 00:12:41.330 Maximum Copy Length: 65535 00:12:41.330 Maximum Source Range Count: 1 00:12:41.330 NGUID/EUI64 Never Reused: No 00:12:41.330 Namespace Write Protected: No 00:12:41.330 Number of LBA Formats: 1 00:12:41.330 Current LBA Format: LBA Format #00 00:12:41.330 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:41.330 00:12:41.330 09:20:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:41.330 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.590 [2024-07-15 09:20:52.572641] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.867 Initializing NVMe Controllers 00:12:46.867 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:46.867 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:46.867 Initialization complete. Launching workers. 00:12:46.867 ======================================================== 00:12:46.867 Latency(us) 00:12:46.867 Device Information : IOPS MiB/s Average min max 00:12:46.867 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35521.19 138.75 3603.39 1145.44 8987.07 00:12:46.867 ======================================================== 00:12:46.867 Total : 35521.19 138.75 3603.39 1145.44 8987.07 00:12:46.867 00:12:46.867 [2024-07-15 09:20:57.594942] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:46.867 09:20:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:46.867 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.867 [2024-07-15 09:20:57.839138] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:52.143 Initializing NVMe Controllers 00:12:52.143 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:52.143 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:52.143 Initialization complete. Launching workers. 
00:12:52.143 ======================================================== 00:12:52.143 Latency(us) 00:12:52.143 Device Information : IOPS MiB/s Average min max 00:12:52.143 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16076.80 62.80 7968.40 4987.82 8981.92 00:12:52.143 ======================================================== 00:12:52.143 Total : 16076.80 62.80 7968.40 4987.82 8981.92 00:12:52.143 00:12:52.143 [2024-07-15 09:21:02.873959] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:52.143 09:21:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:52.143 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.143 [2024-07-15 09:21:03.090010] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:57.417 [2024-07-15 09:21:08.168143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:57.417 Initializing NVMe Controllers 00:12:57.417 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:57.417 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:57.417 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:57.417 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:57.417 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:57.417 Initialization complete. Launching workers. 00:12:57.417 Starting thread on core 2 00:12:57.417 Starting thread on core 3 00:12:57.417 Starting thread on core 1 00:12:57.417 09:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:57.417 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.417 [2024-07-15 09:21:08.457669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:00.702 [2024-07-15 09:21:11.533731] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:00.702 Initializing NVMe Controllers 00:13:00.702 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:00.702 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:00.702 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:00.702 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:00.702 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:00.702 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:00.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:00.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:00.702 Initialization complete. Launching workers. 
00:13:00.702 Starting thread on core 1 with urgent priority queue 00:13:00.702 Starting thread on core 2 with urgent priority queue 00:13:00.702 Starting thread on core 3 with urgent priority queue 00:13:00.702 Starting thread on core 0 with urgent priority queue 00:13:00.702 SPDK bdev Controller (SPDK1 ) core 0: 5259.33 IO/s 19.01 secs/100000 ios 00:13:00.702 SPDK bdev Controller (SPDK1 ) core 1: 5434.00 IO/s 18.40 secs/100000 ios 00:13:00.702 SPDK bdev Controller (SPDK1 ) core 2: 5432.67 IO/s 18.41 secs/100000 ios 00:13:00.702 SPDK bdev Controller (SPDK1 ) core 3: 5956.00 IO/s 16.79 secs/100000 ios 00:13:00.702 ======================================================== 00:13:00.702 00:13:00.702 09:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:00.702 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.702 [2024-07-15 09:21:11.842369] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:00.702 Initializing NVMe Controllers 00:13:00.702 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:00.702 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:00.702 Namespace ID: 1 size: 0GB 00:13:00.702 Initialization complete. 00:13:00.702 INFO: using host memory buffer for IO 00:13:00.702 Hello world! 00:13:00.702 [2024-07-15 09:21:11.875930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:00.961 09:21:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:00.961 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.218 [2024-07-15 09:21:12.179265] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.159 Initializing NVMe Controllers 00:13:02.160 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.160 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.160 Initialization complete. Launching workers. 
00:13:02.160 submit (in ns) avg, min, max = 8044.1, 3504.4, 4017505.6 00:13:02.160 complete (in ns) avg, min, max = 23267.5, 2062.2, 7991024.4 00:13:02.160 00:13:02.160 Submit histogram 00:13:02.160 ================ 00:13:02.160 Range in us Cumulative Count 00:13:02.160 3.484 - 3.508: 0.0147% ( 2) 00:13:02.160 3.508 - 3.532: 0.2565% ( 33) 00:13:02.160 3.532 - 3.556: 1.4069% ( 157) 00:13:02.160 3.556 - 3.579: 4.5871% ( 434) 00:13:02.160 3.579 - 3.603: 9.9143% ( 727) 00:13:02.160 3.603 - 3.627: 18.3410% ( 1150) 00:13:02.160 3.627 - 3.650: 26.8630% ( 1163) 00:13:02.160 3.650 - 3.674: 34.9967% ( 1110) 00:13:02.160 3.674 - 3.698: 42.2877% ( 995) 00:13:02.160 3.698 - 3.721: 49.7838% ( 1023) 00:13:02.160 3.721 - 3.745: 55.0084% ( 713) 00:13:02.160 3.745 - 3.769: 59.8740% ( 664) 00:13:02.160 3.769 - 3.793: 63.2886% ( 466) 00:13:02.160 3.793 - 3.816: 66.7986% ( 479) 00:13:02.160 3.816 - 3.840: 70.3305% ( 482) 00:13:02.160 3.840 - 3.864: 74.4266% ( 559) 00:13:02.160 3.864 - 3.887: 78.1784% ( 512) 00:13:02.160 3.887 - 3.911: 81.7762% ( 491) 00:13:02.160 3.911 - 3.935: 84.6560% ( 393) 00:13:02.160 3.935 - 3.959: 87.0081% ( 321) 00:13:02.160 3.959 - 3.982: 88.7741% ( 241) 00:13:02.160 3.982 - 4.006: 90.3349% ( 213) 00:13:02.160 4.006 - 4.030: 91.4413% ( 151) 00:13:02.160 4.030 - 4.053: 92.3133% ( 119) 00:13:02.160 4.053 - 4.077: 93.3392% ( 140) 00:13:02.160 4.077 - 4.101: 94.2039% ( 118) 00:13:02.160 4.101 - 4.124: 94.8414% ( 87) 00:13:02.160 4.124 - 4.148: 95.5082% ( 91) 00:13:02.160 4.148 - 4.172: 95.8965% ( 53) 00:13:02.160 4.172 - 4.196: 96.2043% ( 42) 00:13:02.160 4.196 - 4.219: 96.4388% ( 32) 00:13:02.160 4.219 - 4.243: 96.5707% ( 18) 00:13:02.160 4.243 - 4.267: 96.7172% ( 20) 00:13:02.160 4.267 - 4.290: 96.8491% ( 18) 00:13:02.160 4.290 - 4.314: 97.0030% ( 21) 00:13:02.160 4.314 - 4.338: 97.0983% ( 13) 00:13:02.160 4.338 - 4.361: 97.1715% ( 10) 00:13:02.160 4.361 - 4.385: 97.2448% ( 10) 00:13:02.160 4.385 - 4.409: 97.2888% ( 6) 00:13:02.160 4.409 - 4.433: 97.3327% ( 6) 00:13:02.160 4.433 - 4.456: 97.3621% ( 4) 00:13:02.160 4.456 - 4.480: 97.3840% ( 3) 00:13:02.160 4.480 - 4.504: 97.4060% ( 3) 00:13:02.160 4.504 - 4.527: 97.4280% ( 3) 00:13:02.160 4.527 - 4.551: 97.4353% ( 1) 00:13:02.160 4.551 - 4.575: 97.4427% ( 1) 00:13:02.160 4.599 - 4.622: 97.4500% ( 1) 00:13:02.160 4.646 - 4.670: 97.4573% ( 1) 00:13:02.160 4.670 - 4.693: 97.4646% ( 1) 00:13:02.160 4.693 - 4.717: 97.4720% ( 1) 00:13:02.160 4.717 - 4.741: 97.4793% ( 1) 00:13:02.160 4.741 - 4.764: 97.4866% ( 1) 00:13:02.160 4.764 - 4.788: 97.4940% ( 1) 00:13:02.160 4.788 - 4.812: 97.5233% ( 4) 00:13:02.160 4.812 - 4.836: 97.5526% ( 4) 00:13:02.160 4.836 - 4.859: 97.5746% ( 3) 00:13:02.160 4.859 - 4.883: 97.6405% ( 9) 00:13:02.160 4.883 - 4.907: 97.6918% ( 7) 00:13:02.160 4.907 - 4.930: 97.7211% ( 4) 00:13:02.160 4.930 - 4.954: 97.7504% ( 4) 00:13:02.160 4.954 - 4.978: 97.8090% ( 8) 00:13:02.160 4.978 - 5.001: 97.8603% ( 7) 00:13:02.160 5.001 - 5.025: 97.9336% ( 10) 00:13:02.160 5.025 - 5.049: 97.9409% ( 1) 00:13:02.160 5.049 - 5.073: 97.9556% ( 2) 00:13:02.160 5.073 - 5.096: 97.9996% ( 6) 00:13:02.160 5.096 - 5.120: 98.0435% ( 6) 00:13:02.160 5.120 - 5.144: 98.0728% ( 4) 00:13:02.160 5.144 - 5.167: 98.0802% ( 1) 00:13:02.160 5.167 - 5.191: 98.1241% ( 6) 00:13:02.160 5.191 - 5.215: 98.1608% ( 5) 00:13:02.160 5.215 - 5.239: 98.1828% ( 3) 00:13:02.160 5.239 - 5.262: 98.2267% ( 6) 00:13:02.160 5.286 - 5.310: 98.2340% ( 1) 00:13:02.160 5.333 - 5.357: 98.2414% ( 1) 00:13:02.160 5.357 - 5.381: 98.2560% ( 2) 00:13:02.160 5.381 - 5.404: 98.2634% ( 1) 
00:13:02.160 5.404 - 5.428: 98.2853% ( 3) 00:13:02.160 5.428 - 5.452: 98.2927% ( 1) 00:13:02.160 5.452 - 5.476: 98.3000% ( 1) 00:13:02.160 5.523 - 5.547: 98.3146% ( 2) 00:13:02.160 5.547 - 5.570: 98.3220% ( 1) 00:13:02.160 5.641 - 5.665: 98.3293% ( 1) 00:13:02.160 5.736 - 5.760: 98.3366% ( 1) 00:13:02.160 5.760 - 5.784: 98.3440% ( 1) 00:13:02.160 5.784 - 5.807: 98.3513% ( 1) 00:13:02.160 5.902 - 5.926: 98.3586% ( 1) 00:13:02.160 5.950 - 5.973: 98.3659% ( 1) 00:13:02.160 5.973 - 5.997: 98.3733% ( 1) 00:13:02.160 5.997 - 6.021: 98.3806% ( 1) 00:13:02.160 6.116 - 6.163: 98.3953% ( 2) 00:13:02.160 6.353 - 6.400: 98.4026% ( 1) 00:13:02.160 6.590 - 6.637: 98.4099% ( 1) 00:13:02.160 6.732 - 6.779: 98.4172% ( 1) 00:13:02.160 6.827 - 6.874: 98.4246% ( 1) 00:13:02.160 6.874 - 6.921: 98.4319% ( 1) 00:13:02.160 7.064 - 7.111: 98.4392% ( 1) 00:13:02.160 7.111 - 7.159: 98.4539% ( 2) 00:13:02.160 7.253 - 7.301: 98.4685% ( 2) 00:13:02.160 7.348 - 7.396: 98.4832% ( 2) 00:13:02.160 7.396 - 7.443: 98.4905% ( 1) 00:13:02.160 7.443 - 7.490: 98.4978% ( 1) 00:13:02.160 7.538 - 7.585: 98.5125% ( 2) 00:13:02.160 7.633 - 7.680: 98.5271% ( 2) 00:13:02.160 7.680 - 7.727: 98.5345% ( 1) 00:13:02.160 7.727 - 7.775: 98.5418% ( 1) 00:13:02.160 7.822 - 7.870: 98.5491% ( 1) 00:13:02.160 7.870 - 7.917: 98.5565% ( 1) 00:13:02.160 7.917 - 7.964: 98.5638% ( 1) 00:13:02.160 8.012 - 8.059: 98.5784% ( 2) 00:13:02.160 8.107 - 8.154: 98.5931% ( 2) 00:13:02.160 8.201 - 8.249: 98.6004% ( 1) 00:13:02.160 8.249 - 8.296: 98.6224% ( 3) 00:13:02.160 8.391 - 8.439: 98.6297% ( 1) 00:13:02.160 8.439 - 8.486: 98.6444% ( 2) 00:13:02.160 8.486 - 8.533: 98.6590% ( 2) 00:13:02.160 8.533 - 8.581: 98.6664% ( 1) 00:13:02.160 8.581 - 8.628: 98.6737% ( 1) 00:13:02.160 8.676 - 8.723: 98.6810% ( 1) 00:13:02.160 8.723 - 8.770: 98.6884% ( 1) 00:13:02.160 8.913 - 8.960: 98.7030% ( 2) 00:13:02.160 8.960 - 9.007: 98.7103% ( 1) 00:13:02.160 9.055 - 9.102: 98.7177% ( 1) 00:13:02.160 9.102 - 9.150: 98.7250% ( 1) 00:13:02.160 9.150 - 9.197: 98.7323% ( 1) 00:13:02.160 9.339 - 9.387: 98.7396% ( 1) 00:13:02.160 9.481 - 9.529: 98.7470% ( 1) 00:13:02.160 9.576 - 9.624: 98.7690% ( 3) 00:13:02.160 9.671 - 9.719: 98.7763% ( 1) 00:13:02.160 9.719 - 9.766: 98.7836% ( 1) 00:13:02.160 9.766 - 9.813: 98.7983% ( 2) 00:13:02.160 9.956 - 10.003: 98.8056% ( 1) 00:13:02.160 10.003 - 10.050: 98.8129% ( 1) 00:13:02.160 10.050 - 10.098: 98.8203% ( 1) 00:13:02.160 10.193 - 10.240: 98.8276% ( 1) 00:13:02.160 10.240 - 10.287: 98.8349% ( 1) 00:13:02.160 10.335 - 10.382: 98.8422% ( 1) 00:13:02.160 10.524 - 10.572: 98.8496% ( 1) 00:13:02.160 10.619 - 10.667: 98.8569% ( 1) 00:13:02.160 10.761 - 10.809: 98.8642% ( 1) 00:13:02.160 10.809 - 10.856: 98.8715% ( 1) 00:13:02.160 10.856 - 10.904: 98.8789% ( 1) 00:13:02.160 10.951 - 10.999: 98.8862% ( 1) 00:13:02.160 11.188 - 11.236: 98.8935% ( 1) 00:13:02.161 11.567 - 11.615: 98.9009% ( 1) 00:13:02.161 11.852 - 11.899: 98.9082% ( 1) 00:13:02.161 12.136 - 12.231: 98.9155% ( 1) 00:13:02.161 12.326 - 12.421: 98.9228% ( 1) 00:13:02.161 12.516 - 12.610: 98.9302% ( 1) 00:13:02.161 12.610 - 12.705: 98.9375% ( 1) 00:13:02.161 12.895 - 12.990: 98.9448% ( 1) 00:13:02.161 12.990 - 13.084: 98.9595% ( 2) 00:13:02.161 13.179 - 13.274: 98.9668% ( 1) 00:13:02.161 13.274 - 13.369: 98.9741% ( 1) 00:13:02.161 13.938 - 14.033: 98.9888% ( 2) 00:13:02.161 14.033 - 14.127: 98.9961% ( 1) 00:13:02.161 14.127 - 14.222: 99.0034% ( 1) 00:13:02.161 14.317 - 14.412: 99.0108% ( 1) 00:13:02.161 14.412 - 14.507: 99.0181% ( 1) 00:13:02.161 14.696 - 14.791: 99.0328% ( 2) 
00:13:02.161 14.791 - 14.886: 99.0401% ( 1) 00:13:02.161 16.687 - 16.782: 99.0474% ( 1) 00:13:02.161 17.067 - 17.161: 99.0547% ( 1) 00:13:02.161 17.161 - 17.256: 99.0621% ( 1) 00:13:02.161 17.256 - 17.351: 99.0840% ( 3) 00:13:02.161 17.446 - 17.541: 99.1280% ( 6) 00:13:02.161 17.541 - 17.636: 99.1573% ( 4) 00:13:02.161 17.636 - 17.730: 99.1720% ( 2) 00:13:02.161 17.730 - 17.825: 99.2159% ( 6) 00:13:02.161 17.825 - 17.920: 99.2599% ( 6) 00:13:02.161 17.920 - 18.015: 99.3039% ( 6) 00:13:02.161 18.015 - 18.110: 99.3478% ( 6) 00:13:02.161 18.110 - 18.204: 99.4138% ( 9) 00:13:02.161 18.204 - 18.299: 99.4724% ( 8) 00:13:02.161 18.299 - 18.394: 99.4871% ( 2) 00:13:02.161 18.394 - 18.489: 99.5677% ( 11) 00:13:02.161 18.489 - 18.584: 99.6336% ( 9) 00:13:02.161 18.584 - 18.679: 99.6849% ( 7) 00:13:02.161 18.679 - 18.773: 99.7289% ( 6) 00:13:02.161 18.773 - 18.868: 99.7655% ( 5) 00:13:02.161 18.868 - 18.963: 99.7948% ( 4) 00:13:02.161 18.963 - 19.058: 99.8315% ( 5) 00:13:02.161 19.058 - 19.153: 99.8388% ( 1) 00:13:02.161 19.342 - 19.437: 99.8534% ( 2) 00:13:02.161 19.532 - 19.627: 99.8608% ( 1) 00:13:02.161 21.997 - 22.092: 99.8681% ( 1) 00:13:02.161 24.273 - 24.462: 99.8754% ( 1) 00:13:02.161 24.462 - 24.652: 99.8828% ( 1) 00:13:02.161 25.979 - 26.169: 99.8901% ( 1) 00:13:02.161 28.065 - 28.255: 99.8974% ( 1) 00:13:02.161 3980.705 - 4004.978: 99.9707% ( 10) 00:13:02.161 4004.978 - 4029.250: 100.0000% ( 4) 00:13:02.161 00:13:02.161 Complete histogram 00:13:02.161 ================== 00:13:02.161 Range in us Cumulative Count 00:13:02.161 2.062 - 2.074: 13.0138% ( 1776) 00:13:02.161 2.074 - 2.086: 32.4540% ( 2653) 00:13:02.161 2.086 - 2.098: 34.1027% ( 225) 00:13:02.161 2.098 - 2.110: 52.7369% ( 2543) 00:13:02.161 2.110 - 2.121: 60.9511% ( 1121) 00:13:02.161 2.121 - 2.133: 63.0395% ( 285) 00:13:02.161 2.133 - 2.145: 70.7994% ( 1059) 00:13:02.161 2.145 - 2.157: 74.6391% ( 524) 00:13:02.161 2.157 - 2.169: 75.8262% ( 162) 00:13:02.161 2.169 - 2.181: 81.2853% ( 745) 00:13:02.161 2.181 - 2.193: 83.2198% ( 264) 00:13:02.161 2.193 - 2.204: 83.8646% ( 88) 00:13:02.161 2.204 - 2.216: 86.3486% ( 339) 00:13:02.161 2.216 - 2.228: 88.5616% ( 302) 00:13:02.161 2.228 - 2.240: 90.2616% ( 232) 00:13:02.161 2.240 - 2.252: 92.9142% ( 362) 00:13:02.161 2.252 - 2.264: 93.8668% ( 130) 00:13:02.161 2.264 - 2.276: 94.0866% ( 30) 00:13:02.161 2.276 - 2.287: 94.4603% ( 51) 00:13:02.161 2.287 - 2.299: 94.7754% ( 43) 00:13:02.161 2.299 - 2.311: 95.2957% ( 71) 00:13:02.161 2.311 - 2.323: 95.5814% ( 39) 00:13:02.161 2.323 - 2.335: 95.6840% ( 14) 00:13:02.161 2.335 - 2.347: 95.7500% ( 9) 00:13:02.161 2.347 - 2.359: 95.8672% ( 16) 00:13:02.161 2.359 - 2.370: 96.1090% ( 33) 00:13:02.161 2.370 - 2.382: 96.4241% ( 43) 00:13:02.161 2.382 - 2.394: 96.8711% ( 61) 00:13:02.161 2.394 - 2.406: 97.2009% ( 45) 00:13:02.161 2.406 - 2.418: 97.3914% ( 26) 00:13:02.161 2.418 - 2.430: 97.6332% ( 33) 00:13:02.161 2.430 - 2.441: 97.7577% ( 17) 00:13:02.161 2.441 - 2.453: 97.8823% ( 17) 00:13:02.161 2.453 - 2.465: 98.0069% ( 17) 00:13:02.161 2.465 - 2.477: 98.0728% ( 9) 00:13:02.161 2.477 - 2.489: 98.1241% ( 7) 00:13:02.161 2.489 - 2.501: 98.2194% ( 13) 00:13:02.161 2.501 - 2.513: 98.2634% ( 6) 00:13:02.161 2.513 - 2.524: 98.2780% ( 2) 00:13:02.161 2.524 - 2.536: 98.3146% ( 5) 00:13:02.161 2.536 - 2.548: 98.3366% ( 3) 00:13:02.161 2.548 - 2.560: 98.3513% ( 2) 00:13:02.161 2.560 - 2.572: 98.3586% ( 1) 00:13:02.161 2.572 - 2.584: 98.3659% ( 1) 00:13:02.161 2.584 - 2.596: 98.3733% ( 1) 00:13:02.161 2.596 - 2.607: 98.3879% ( 2) 00:13:02.161 2.619 - 2.631: 
98.3953% ( 1) 00:13:02.161 2.643 - 2.655: 98.4026% ( 1) 00:13:02.161 2.655 - 2.667: 98.4099% ( 1) 00:13:02.161 2.679 - 2.690: 98.4246% ( 2) 00:13:02.161 2.690 - 2.702: 98.4392% ( 2) 00:13:02.161 2.702 - 2.714: 98.4465% ( 1) 00:13:02.161 2.714 - 2.726: 98.4539% ( 1) 00:13:02.161 2.750 - 2.761: 98.4612% ( 1) 00:13:02.161 2.761 - 2.773: 98.4685% ( 1) 00:13:02.161 2.797 - 2.809: 98.4759% ( 1) 00:13:02.161 2.856 - 2.868: 98.4832% ( 1) 00:13:02.161 2.868 - 2.880: 98.4905% ( 1) 00:13:02.161 2.880 - 2.892: 98.4978% ( 1) 00:13:02.161 2.892 - 2.904: 98.5052% ( 1) 00:13:02.161 [2024-07-15 09:21:13.199324] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:02.161 2.904 - 2.916: 98.5125% ( 1) 00:13:02.161 3.058 - 3.081: 98.5198% ( 1) 00:13:02.161 3.366 - 3.390: 98.5271% ( 1) 00:13:02.161 3.390 - 3.413: 98.5345% ( 1) 00:13:02.161 3.413 - 3.437: 98.5565% ( 3) 00:13:02.161 3.437 - 3.461: 98.5931% ( 5) 00:13:02.161 3.532 - 3.556: 98.6004% ( 1) 00:13:02.161 3.556 - 3.579: 98.6151% ( 2) 00:13:02.161 3.579 - 3.603: 98.6224% ( 1) 00:13:02.161 3.603 - 3.627: 98.6371% ( 2) 00:13:02.161 3.627 - 3.650: 98.6444% ( 1) 00:13:02.161 3.721 - 3.745: 98.6590% ( 2) 00:13:02.161 3.769 - 3.793: 98.6664% ( 1) 00:13:02.161 3.793 - 3.816: 98.6737% ( 1) 00:13:02.161 3.816 - 3.840: 98.6957% ( 3) 00:13:02.161 3.840 - 3.864: 98.7103% ( 2) 00:13:02.161 3.864 - 3.887: 98.7177% ( 1) 00:13:02.161 3.887 - 3.911: 98.7250% ( 1) 00:13:02.161 3.911 - 3.935: 98.7323% ( 1) 00:13:02.161 3.982 - 4.006: 98.7396% ( 1) 00:13:02.161 4.101 - 4.124: 98.7470% ( 1) 00:13:02.161 5.286 - 5.310: 98.7543% ( 1) 00:13:02.161 5.452 - 5.476: 98.7616% ( 1) 00:13:02.161 5.641 - 5.665: 98.7690% ( 1) 00:13:02.161 6.400 - 6.447: 98.7763% ( 1) 00:13:02.161 6.447 - 6.495: 98.7836% ( 1) 00:13:02.161 6.542 - 6.590: 98.7909% ( 1) 00:13:02.161 6.590 - 6.637: 98.8056% ( 2) 00:13:02.161 6.684 - 6.732: 98.8129% ( 1) 00:13:02.161 6.732 - 6.779: 98.8276% ( 2) 00:13:02.161 6.874 - 6.921: 98.8349% ( 1) 00:13:02.161 6.921 - 6.969: 98.8496% ( 2) 00:13:02.161 7.111 - 7.159: 98.8569% ( 1) 00:13:02.161 7.538 - 7.585: 98.8642% ( 1) 00:13:02.161 7.633 - 7.680: 98.8715% ( 1) 00:13:02.161 7.680 - 7.727: 98.8862% ( 2) 00:13:02.162 7.964 - 8.012: 98.8935% ( 1) 00:13:02.162 8.059 - 8.107: 98.9009% ( 1) 00:13:02.162 8.107 - 8.154: 98.9082% ( 1) 00:13:02.162 8.296 - 8.344: 98.9155% ( 1) 00:13:02.162 8.628 - 8.676: 98.9228% ( 1) 00:13:02.162 9.150 - 9.197: 98.9302% ( 1) 00:13:02.162 9.813 - 9.861: 98.9375% ( 1) 00:13:02.162 15.644 - 15.739: 98.9595% ( 3) 00:13:02.162 15.739 - 15.834: 98.9741% ( 2) 00:13:02.162 15.834 - 15.929: 98.9815% ( 1) 00:13:02.162 15.929 - 16.024: 99.0108% ( 4) 00:13:02.162 16.024 - 16.119: 99.0547% ( 6) 00:13:02.162 16.119 - 16.213: 99.0914% ( 5) 00:13:02.162 16.213 - 16.308: 99.1207% ( 4) 00:13:02.162 16.308 - 16.403: 99.1353% ( 2) 00:13:02.162 16.403 - 16.498: 99.1720% ( 5) 00:13:02.162 16.498 - 16.593: 99.2086% ( 5) 00:13:02.162 16.593 - 16.687: 99.2526% ( 6) 00:13:02.162 16.687 - 16.782: 99.3185% ( 9) 00:13:02.162 16.782 - 16.877: 99.3552% ( 5) 00:13:02.162 16.877 - 16.972: 99.3698% ( 2) 00:13:02.162 17.067 - 17.161: 99.3772% ( 1) 00:13:02.162 17.161 - 17.256: 99.3845% ( 1) 00:13:02.162 17.256 - 17.351: 99.3991% ( 2) 00:13:02.162 17.351 - 17.446: 99.4138% ( 2) 00:13:02.162 17.446 - 17.541: 99.4284% ( 2) 00:13:02.162 17.636 - 17.730: 99.4358% ( 1) 00:13:02.162 17.730 - 17.825: 99.4504% ( 2) 00:13:02.162 18.204 - 18.299: 99.4578% ( 1) 00:13:02.162 19.247 - 19.342: 99.4724% ( 2) 00:13:02.162 22.092 - 22.187:
99.4797% ( 1) 00:13:02.162 1019.449 - 1025.517: 99.4871% ( 1) 00:13:02.162 3009.801 - 3021.938: 99.4944% ( 1) 00:13:02.162 3094.756 - 3106.892: 99.5017% ( 1) 00:13:02.162 3980.705 - 4004.978: 99.8828% ( 52) 00:13:02.162 4004.978 - 4029.250: 99.9853% ( 14) 00:13:02.162 7961.410 - 8009.956: 100.0000% ( 2) 00:13:02.162 00:13:02.162 09:21:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:02.162 09:21:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:02.162 09:21:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:02.162 09:21:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:02.162 09:21:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:02.422 [ 00:13:02.422 { 00:13:02.422 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:02.422 "subtype": "Discovery", 00:13:02.422 "listen_addresses": [], 00:13:02.422 "allow_any_host": true, 00:13:02.422 "hosts": [] 00:13:02.422 }, 00:13:02.422 { 00:13:02.422 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:02.422 "subtype": "NVMe", 00:13:02.422 "listen_addresses": [ 00:13:02.422 { 00:13:02.422 "trtype": "VFIOUSER", 00:13:02.422 "adrfam": "IPv4", 00:13:02.422 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:02.422 "trsvcid": "0" 00:13:02.422 } 00:13:02.422 ], 00:13:02.422 "allow_any_host": true, 00:13:02.422 "hosts": [], 00:13:02.422 "serial_number": "SPDK1", 00:13:02.422 "model_number": "SPDK bdev Controller", 00:13:02.422 "max_namespaces": 32, 00:13:02.422 "min_cntlid": 1, 00:13:02.422 "max_cntlid": 65519, 00:13:02.422 "namespaces": [ 00:13:02.422 { 00:13:02.422 "nsid": 1, 00:13:02.422 "bdev_name": "Malloc1", 00:13:02.422 "name": "Malloc1", 00:13:02.422 "nguid": "87F10B2B24044DB980C0FB200322FC78", 00:13:02.422 "uuid": "87f10b2b-2404-4db9-80c0-fb200322fc78" 00:13:02.422 } 00:13:02.422 ] 00:13:02.422 }, 00:13:02.422 { 00:13:02.422 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:02.422 "subtype": "NVMe", 00:13:02.422 "listen_addresses": [ 00:13:02.422 { 00:13:02.422 "trtype": "VFIOUSER", 00:13:02.422 "adrfam": "IPv4", 00:13:02.422 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:02.422 "trsvcid": "0" 00:13:02.422 } 00:13:02.422 ], 00:13:02.422 "allow_any_host": true, 00:13:02.422 "hosts": [], 00:13:02.422 "serial_number": "SPDK2", 00:13:02.422 "model_number": "SPDK bdev Controller", 00:13:02.422 "max_namespaces": 32, 00:13:02.422 "min_cntlid": 1, 00:13:02.422 "max_cntlid": 65519, 00:13:02.422 "namespaces": [ 00:13:02.422 { 00:13:02.422 "nsid": 1, 00:13:02.422 "bdev_name": "Malloc2", 00:13:02.422 "name": "Malloc2", 00:13:02.422 "nguid": "415E29FFE1864004A965904790FF8997", 00:13:02.422 "uuid": "415e29ff-e186-4004-a965-904790ff8997" 00:13:02.422 } 00:13:02.422 ] 00:13:02.422 } 00:13:02.422 ] 00:13:02.422 09:21:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:02.422 09:21:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=784876 00:13:02.422 09:21:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 
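The -t /tmp/aer_touch_file argument above evidently names a marker file that the aer tool creates once its event callbacks are armed; the waitforfile helper traced in the lines that follow simply polls for that file. A minimal reconstruction of that helper from the autotest_common.sh trace (the 200-iteration cap and 0.1 s interval are read straight off the '-lt 200' and 'sleep 0.1' lines; this is a sketch of the traced logic, not the verbatim function):

    waitforfile() {
        # Poll until the file named in $1 appears; give up after
        # 200 iterations x 0.1 s (~20 s), as the trace below does.
        local i=0
        while [ ! -e "$1" ]; do
            [ "$i" -lt 200 ] || return 1
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }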
00:13:02.422 09:21:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:02.422 09:21:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:02.422 09:21:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:02.422 09:21:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:13:02.422 09:21:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:13:02.422 09:21:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:13:02.422 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.422 09:21:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:02.422 09:21:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:13:02.422 09:21:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:13:02.422 09:21:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:13:02.680 [2024-07-15 09:21:13.661293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.680 09:21:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:02.680 09:21:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:02.680 09:21:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:02.680 09:21:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:02.680 09:21:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:02.939 Malloc3 00:13:02.939 09:21:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:03.197 [2024-07-15 09:21:14.222277] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:03.197 09:21:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:03.197 Asynchronous Event Request test 00:13:03.197 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:03.197 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:03.197 Registering asynchronous event callbacks... 00:13:03.197 Starting namespace attribute notice tests for all controllers... 00:13:03.197 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:03.197 aer_cb - Changed Namespace 00:13:03.197 Cleaning up... 
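Summarizing the hot-add sequence just exercised: three RPCs reproduce the namespace-attribute AER reported above. A condensed sketch of the pattern (the rpc.py path, bdev parameters, and NQN are copied verbatim from this run; the $rpc shorthand is editorial):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # 1. Create a 64 MB malloc bdev with 512-byte blocks, as above.
    $rpc bdev_malloc_create 64 512 --name Malloc3
    # 2. Attach it to the live subsystem as namespace 2; this is the step
    #    that fires the 'Changed Namespace' AER logged above.
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
    # 3. Confirm nsid 2 now sits next to Malloc1 (the listing that
    #    follows shows exactly this).
    $rpc nvmf_get_subsystems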
00:13:03.458 [ 00:13:03.458 { 00:13:03.458 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:03.458 "subtype": "Discovery", 00:13:03.458 "listen_addresses": [], 00:13:03.458 "allow_any_host": true, 00:13:03.458 "hosts": [] 00:13:03.458 }, 00:13:03.458 { 00:13:03.458 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:03.458 "subtype": "NVMe", 00:13:03.458 "listen_addresses": [ 00:13:03.458 { 00:13:03.458 "trtype": "VFIOUSER", 00:13:03.458 "adrfam": "IPv4", 00:13:03.458 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:03.458 "trsvcid": "0" 00:13:03.458 } 00:13:03.458 ], 00:13:03.458 "allow_any_host": true, 00:13:03.458 "hosts": [], 00:13:03.458 "serial_number": "SPDK1", 00:13:03.458 "model_number": "SPDK bdev Controller", 00:13:03.458 "max_namespaces": 32, 00:13:03.458 "min_cntlid": 1, 00:13:03.458 "max_cntlid": 65519, 00:13:03.458 "namespaces": [ 00:13:03.458 { 00:13:03.458 "nsid": 1, 00:13:03.458 "bdev_name": "Malloc1", 00:13:03.458 "name": "Malloc1", 00:13:03.458 "nguid": "87F10B2B24044DB980C0FB200322FC78", 00:13:03.458 "uuid": "87f10b2b-2404-4db9-80c0-fb200322fc78" 00:13:03.458 }, 00:13:03.458 { 00:13:03.458 "nsid": 2, 00:13:03.458 "bdev_name": "Malloc3", 00:13:03.458 "name": "Malloc3", 00:13:03.458 "nguid": "1E89D2D65FB6482FB8D9D46D244415AE", 00:13:03.458 "uuid": "1e89d2d6-5fb6-482f-b8d9-d46d244415ae" 00:13:03.458 } 00:13:03.458 ] 00:13:03.458 }, 00:13:03.458 { 00:13:03.458 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:03.458 "subtype": "NVMe", 00:13:03.458 "listen_addresses": [ 00:13:03.458 { 00:13:03.458 "trtype": "VFIOUSER", 00:13:03.458 "adrfam": "IPv4", 00:13:03.458 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:03.458 "trsvcid": "0" 00:13:03.458 } 00:13:03.458 ], 00:13:03.458 "allow_any_host": true, 00:13:03.458 "hosts": [], 00:13:03.458 "serial_number": "SPDK2", 00:13:03.458 "model_number": "SPDK bdev Controller", 00:13:03.458 "max_namespaces": 32, 00:13:03.458 "min_cntlid": 1, 00:13:03.458 "max_cntlid": 65519, 00:13:03.458 "namespaces": [ 00:13:03.458 { 00:13:03.458 "nsid": 1, 00:13:03.458 "bdev_name": "Malloc2", 00:13:03.458 "name": "Malloc2", 00:13:03.458 "nguid": "415E29FFE1864004A965904790FF8997", 00:13:03.458 "uuid": "415e29ff-e186-4004-a965-904790ff8997" 00:13:03.458 } 00:13:03.458 ] 00:13:03.458 } 00:13:03.458 ] 00:13:03.458 09:21:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 784876 00:13:03.458 09:21:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:03.458 09:21:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:03.458 09:21:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:03.458 09:21:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:03.458 [2024-07-15 09:21:14.505313] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
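As an aside, the subsystem listings printed by these nvmf_get_subsystems calls can be checked mechanically instead of by eye; a sketch, assuming python3 is present on the test node (the RPC path and the JSON shape match the output above):

    # Print each subsystem NQN with its namespace bdevs.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
      | python3 -c 'import json,sys; [print(s["nqn"], [n["bdev_name"] for n in s.get("namespaces", [])]) for s in json.load(sys.stdin)]'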
00:13:03.458 [2024-07-15 09:21:14.505364] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785012 ] 00:13:03.458 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.458 [2024-07-15 09:21:14.536984] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:03.458 [2024-07-15 09:21:14.549132] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:03.458 [2024-07-15 09:21:14.549162] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6e92864000 00:13:03.458 [2024-07-15 09:21:14.550136] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.458 [2024-07-15 09:21:14.551129] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.458 [2024-07-15 09:21:14.552135] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.458 [2024-07-15 09:21:14.553143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:03.458 [2024-07-15 09:21:14.554158] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:03.458 [2024-07-15 09:21:14.555187] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.458 [2024-07-15 09:21:14.556180] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:03.458 [2024-07-15 09:21:14.557185] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.458 [2024-07-15 09:21:14.558206] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:03.458 [2024-07-15 09:21:14.558228] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6e92859000 00:13:03.458 [2024-07-15 09:21:14.559341] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:03.458 [2024-07-15 09:21:14.574070] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:03.458 [2024-07-15 09:21:14.574120] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:03.459 [2024-07-15 09:21:14.576219] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:03.459 [2024-07-15 09:21:14.576272] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:03.459 [2024-07-15 09:21:14.576357] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:13:03.459 [2024-07-15 09:21:14.576381] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:03.459 [2024-07-15 09:21:14.576390] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:03.459 [2024-07-15 09:21:14.577813] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:03.459 [2024-07-15 09:21:14.577835] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:03.459 [2024-07-15 09:21:14.577849] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:03.459 [2024-07-15 09:21:14.578230] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:03.459 [2024-07-15 09:21:14.578250] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:03.459 [2024-07-15 09:21:14.578264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:03.459 [2024-07-15 09:21:14.579235] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:03.459 [2024-07-15 09:21:14.579255] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:03.459 [2024-07-15 09:21:14.580243] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:03.459 [2024-07-15 09:21:14.580262] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:03.459 [2024-07-15 09:21:14.580271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:03.459 [2024-07-15 09:21:14.580282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:03.459 [2024-07-15 09:21:14.580392] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:03.459 [2024-07-15 09:21:14.580399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:03.459 [2024-07-15 09:21:14.580408] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:03.459 [2024-07-15 09:21:14.581256] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:03.459 [2024-07-15 09:21:14.582265] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:03.459 [2024-07-15 09:21:14.584814] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:03.459 [2024-07-15 09:21:14.585280] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:03.459 [2024-07-15 09:21:14.585347] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:03.459 [2024-07-15 09:21:14.586296] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:03.459 [2024-07-15 09:21:14.586316] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:03.459 [2024-07-15 09:21:14.586325] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:03.459 [2024-07-15 09:21:14.586349] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:03.459 [2024-07-15 09:21:14.586362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:03.459 [2024-07-15 09:21:14.586396] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:03.459 [2024-07-15 09:21:14.586407] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:03.459 [2024-07-15 09:21:14.586425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:03.459 [2024-07-15 09:21:14.595812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:03.459 [2024-07-15 09:21:14.595835] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:03.459 [2024-07-15 09:21:14.595848] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:03.459 [2024-07-15 09:21:14.595857] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:03.459 [2024-07-15 09:21:14.595864] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:03.459 [2024-07-15 09:21:14.595872] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:03.459 [2024-07-15 09:21:14.595879] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:03.459 [2024-07-15 09:21:14.595887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:03.459 [2024-07-15 09:21:14.595900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:03.459 [2024-07-15 09:21:14.595916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:13:03.459 [2024-07-15 09:21:14.603810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:03.459 [2024-07-15 09:21:14.603839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.459 [2024-07-15 09:21:14.603853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.459 [2024-07-15 09:21:14.603865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.459 [2024-07-15 09:21:14.603878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.459 [2024-07-15 09:21:14.603886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:03.459 [2024-07-15 09:21:14.603901] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:03.459 [2024-07-15 09:21:14.603916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:03.459 [2024-07-15 09:21:14.611829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:03.459 [2024-07-15 09:21:14.611847] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:03.459 [2024-07-15 09:21:14.611856] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:03.459 [2024-07-15 09:21:14.611868] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:03.459 [2024-07-15 09:21:14.611877] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:03.459 [2024-07-15 09:21:14.611891] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:03.459 [2024-07-15 09:21:14.618841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:03.459 [2024-07-15 09:21:14.618933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:03.459 [2024-07-15 09:21:14.618951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:03.459 [2024-07-15 09:21:14.618965] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:03.459 [2024-07-15 09:21:14.618974] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:03.459 [2024-07-15 09:21:14.618984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:13:03.459 [2024-07-15 09:21:14.627812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:03.459 [2024-07-15 09:21:14.627835] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:03.459 [2024-07-15 09:21:14.627870] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:03.459 [2024-07-15 09:21:14.627886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:03.459 [2024-07-15 09:21:14.627899] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:03.459 [2024-07-15 09:21:14.627908] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:03.459 [2024-07-15 09:21:14.627918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:03.460 [2024-07-15 09:21:14.635810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:03.460 [2024-07-15 09:21:14.635838] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:03.460 [2024-07-15 09:21:14.635854] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:03.460 [2024-07-15 09:21:14.635868] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:03.460 [2024-07-15 09:21:14.635877] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:03.460 [2024-07-15 09:21:14.635887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:03.460 [2024-07-15 09:21:14.643812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:03.460 [2024-07-15 09:21:14.643833] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:03.460 [2024-07-15 09:21:14.643846] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:03.460 [2024-07-15 09:21:14.643860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:03.460 [2024-07-15 09:21:14.643870] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:03.460 [2024-07-15 09:21:14.643878] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:03.460 [2024-07-15 09:21:14.643886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:03.460 
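The register offsets in the nvme_vfio_ctrlr_get/set_reg lines above follow the standard NVMe controller register map: 0x0 CAP, 0x8 VS (0x10300 = version 1.3.0), 0x14 CC, 0x1c CSTS, 0x24 AQA, 0x28 ASQ, 0x30 ACQ. That is why the trace writes the admin queue addresses to 0x28/0x30 and the queue sizes to 0x24 before setting CC.EN at 0x14, then polls CSTS.RDY at 0x1c. To regenerate just this bring-up trace outside the harness, the identify invocation from above can be rerun directly (command copied verbatim; the comments are editorial annotation, not tool output):

    # -L enables the named DEBUG log components whose lines appear in this
    # trace; -g appears as --single-file-segments in the EAL parameters above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -g -L nvme -L nvme_vfio -L vfio_pci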
[2024-07-15 09:21:14.643897] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:03.460 [2024-07-15 09:21:14.643905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:03.460 [2024-07-15 09:21:14.643914] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:03.460 [2024-07-15 09:21:14.643939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:03.720 [2024-07-15 09:21:14.651832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:03.720 [2024-07-15 09:21:14.651861] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:03.720 [2024-07-15 09:21:14.659828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:03.720 [2024-07-15 09:21:14.659865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:03.720 [2024-07-15 09:21:14.667815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:03.720 [2024-07-15 09:21:14.667840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:03.720 [2024-07-15 09:21:14.675829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:03.720 [2024-07-15 09:21:14.675871] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:03.720 [2024-07-15 09:21:14.675883] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:03.720 [2024-07-15 09:21:14.675889] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:03.720 [2024-07-15 09:21:14.675895] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:03.720 [2024-07-15 09:21:14.675905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:03.720 [2024-07-15 09:21:14.675933] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:03.720 [2024-07-15 09:21:14.675942] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:03.720 [2024-07-15 09:21:14.675952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:03.720 [2024-07-15 09:21:14.675963] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:03.720 [2024-07-15 09:21:14.675971] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:03.720 [2024-07-15 09:21:14.675980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:13:03.720 [2024-07-15 09:21:14.675993] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:03.720 [2024-07-15 09:21:14.676002] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:03.720 [2024-07-15 09:21:14.676011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:03.720 [2024-07-15 09:21:14.683817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:03.720 [2024-07-15 09:21:14.683851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:03.720 [2024-07-15 09:21:14.683872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:03.720 [2024-07-15 09:21:14.683885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:03.720 ===================================================== 00:13:03.720 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:03.720 ===================================================== 00:13:03.720 Controller Capabilities/Features 00:13:03.720 ================================ 00:13:03.720 Vendor ID: 4e58 00:13:03.720 Subsystem Vendor ID: 4e58 00:13:03.720 Serial Number: SPDK2 00:13:03.720 Model Number: SPDK bdev Controller 00:13:03.720 Firmware Version: 24.09 00:13:03.720 Recommended Arb Burst: 6 00:13:03.720 IEEE OUI Identifier: 8d 6b 50 00:13:03.720 Multi-path I/O 00:13:03.720 May have multiple subsystem ports: Yes 00:13:03.720 May have multiple controllers: Yes 00:13:03.720 Associated with SR-IOV VF: No 00:13:03.720 Max Data Transfer Size: 131072 00:13:03.720 Max Number of Namespaces: 32 00:13:03.720 Max Number of I/O Queues: 127 00:13:03.720 NVMe Specification Version (VS): 1.3 00:13:03.720 NVMe Specification Version (Identify): 1.3 00:13:03.720 Maximum Queue Entries: 256 00:13:03.720 Contiguous Queues Required: Yes 00:13:03.720 Arbitration Mechanisms Supported 00:13:03.720 Weighted Round Robin: Not Supported 00:13:03.720 Vendor Specific: Not Supported 00:13:03.720 Reset Timeout: 15000 ms 00:13:03.720 Doorbell Stride: 4 bytes 00:13:03.720 NVM Subsystem Reset: Not Supported 00:13:03.720 Command Sets Supported 00:13:03.720 NVM Command Set: Supported 00:13:03.720 Boot Partition: Not Supported 00:13:03.720 Memory Page Size Minimum: 4096 bytes 00:13:03.720 Memory Page Size Maximum: 4096 bytes 00:13:03.720 Persistent Memory Region: Not Supported 00:13:03.720 Optional Asynchronous Events Supported 00:13:03.720 Namespace Attribute Notices: Supported 00:13:03.720 Firmware Activation Notices: Not Supported 00:13:03.720 ANA Change Notices: Not Supported 00:13:03.720 PLE Aggregate Log Change Notices: Not Supported 00:13:03.720 LBA Status Info Alert Notices: Not Supported 00:13:03.720 EGE Aggregate Log Change Notices: Not Supported 00:13:03.720 Normal NVM Subsystem Shutdown event: Not Supported 00:13:03.720 Zone Descriptor Change Notices: Not Supported 00:13:03.720 Discovery Log Change Notices: Not Supported 00:13:03.720 Controller Attributes 00:13:03.720 128-bit Host Identifier: Supported 00:13:03.720 Non-Operational Permissive Mode: Not Supported 00:13:03.720 NVM Sets: Not Supported 00:13:03.720 Read Recovery Levels: Not Supported 
00:13:03.720 Endurance Groups: Not Supported 00:13:03.720 Predictable Latency Mode: Not Supported 00:13:03.720 Traffic Based Keep ALive: Not Supported 00:13:03.720 Namespace Granularity: Not Supported 00:13:03.720 SQ Associations: Not Supported 00:13:03.720 UUID List: Not Supported 00:13:03.720 Multi-Domain Subsystem: Not Supported 00:13:03.720 Fixed Capacity Management: Not Supported 00:13:03.720 Variable Capacity Management: Not Supported 00:13:03.720 Delete Endurance Group: Not Supported 00:13:03.720 Delete NVM Set: Not Supported 00:13:03.721 Extended LBA Formats Supported: Not Supported 00:13:03.721 Flexible Data Placement Supported: Not Supported 00:13:03.721 00:13:03.721 Controller Memory Buffer Support 00:13:03.721 ================================ 00:13:03.721 Supported: No 00:13:03.721 00:13:03.721 Persistent Memory Region Support 00:13:03.721 ================================ 00:13:03.721 Supported: No 00:13:03.721 00:13:03.721 Admin Command Set Attributes 00:13:03.721 ============================ 00:13:03.721 Security Send/Receive: Not Supported 00:13:03.721 Format NVM: Not Supported 00:13:03.721 Firmware Activate/Download: Not Supported 00:13:03.721 Namespace Management: Not Supported 00:13:03.721 Device Self-Test: Not Supported 00:13:03.721 Directives: Not Supported 00:13:03.721 NVMe-MI: Not Supported 00:13:03.721 Virtualization Management: Not Supported 00:13:03.721 Doorbell Buffer Config: Not Supported 00:13:03.721 Get LBA Status Capability: Not Supported 00:13:03.721 Command & Feature Lockdown Capability: Not Supported 00:13:03.721 Abort Command Limit: 4 00:13:03.721 Async Event Request Limit: 4 00:13:03.721 Number of Firmware Slots: N/A 00:13:03.721 Firmware Slot 1 Read-Only: N/A 00:13:03.721 Firmware Activation Without Reset: N/A 00:13:03.721 Multiple Update Detection Support: N/A 00:13:03.721 Firmware Update Granularity: No Information Provided 00:13:03.721 Per-Namespace SMART Log: No 00:13:03.721 Asymmetric Namespace Access Log Page: Not Supported 00:13:03.721 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:03.721 Command Effects Log Page: Supported 00:13:03.721 Get Log Page Extended Data: Supported 00:13:03.721 Telemetry Log Pages: Not Supported 00:13:03.721 Persistent Event Log Pages: Not Supported 00:13:03.721 Supported Log Pages Log Page: May Support 00:13:03.721 Commands Supported & Effects Log Page: Not Supported 00:13:03.721 Feature Identifiers & Effects Log Page:May Support 00:13:03.721 NVMe-MI Commands & Effects Log Page: May Support 00:13:03.721 Data Area 4 for Telemetry Log: Not Supported 00:13:03.721 Error Log Page Entries Supported: 128 00:13:03.721 Keep Alive: Supported 00:13:03.721 Keep Alive Granularity: 10000 ms 00:13:03.721 00:13:03.721 NVM Command Set Attributes 00:13:03.721 ========================== 00:13:03.721 Submission Queue Entry Size 00:13:03.721 Max: 64 00:13:03.721 Min: 64 00:13:03.721 Completion Queue Entry Size 00:13:03.721 Max: 16 00:13:03.721 Min: 16 00:13:03.721 Number of Namespaces: 32 00:13:03.721 Compare Command: Supported 00:13:03.721 Write Uncorrectable Command: Not Supported 00:13:03.721 Dataset Management Command: Supported 00:13:03.721 Write Zeroes Command: Supported 00:13:03.721 Set Features Save Field: Not Supported 00:13:03.721 Reservations: Not Supported 00:13:03.721 Timestamp: Not Supported 00:13:03.721 Copy: Supported 00:13:03.721 Volatile Write Cache: Present 00:13:03.721 Atomic Write Unit (Normal): 1 00:13:03.721 Atomic Write Unit (PFail): 1 00:13:03.721 Atomic Compare & Write Unit: 1 00:13:03.721 Fused Compare & Write: 
Supported 00:13:03.721 Scatter-Gather List 00:13:03.721 SGL Command Set: Supported (Dword aligned) 00:13:03.721 SGL Keyed: Not Supported 00:13:03.721 SGL Bit Bucket Descriptor: Not Supported 00:13:03.721 SGL Metadata Pointer: Not Supported 00:13:03.721 Oversized SGL: Not Supported 00:13:03.721 SGL Metadata Address: Not Supported 00:13:03.721 SGL Offset: Not Supported 00:13:03.721 Transport SGL Data Block: Not Supported 00:13:03.721 Replay Protected Memory Block: Not Supported 00:13:03.721 00:13:03.721 Firmware Slot Information 00:13:03.721 ========================= 00:13:03.721 Active slot: 1 00:13:03.721 Slot 1 Firmware Revision: 24.09 00:13:03.721 00:13:03.721 00:13:03.721 Commands Supported and Effects 00:13:03.721 ============================== 00:13:03.721 Admin Commands 00:13:03.721 -------------- 00:13:03.721 Get Log Page (02h): Supported 00:13:03.721 Identify (06h): Supported 00:13:03.721 Abort (08h): Supported 00:13:03.721 Set Features (09h): Supported 00:13:03.721 Get Features (0Ah): Supported 00:13:03.721 Asynchronous Event Request (0Ch): Supported 00:13:03.721 Keep Alive (18h): Supported 00:13:03.721 I/O Commands 00:13:03.721 ------------ 00:13:03.721 Flush (00h): Supported LBA-Change 00:13:03.721 Write (01h): Supported LBA-Change 00:13:03.721 Read (02h): Supported 00:13:03.721 Compare (05h): Supported 00:13:03.721 Write Zeroes (08h): Supported LBA-Change 00:13:03.721 Dataset Management (09h): Supported LBA-Change 00:13:03.721 Copy (19h): Supported LBA-Change 00:13:03.721 00:13:03.721 Error Log 00:13:03.721 ========= 00:13:03.721 00:13:03.721 Arbitration 00:13:03.721 =========== 00:13:03.721 Arbitration Burst: 1 00:13:03.721 00:13:03.721 Power Management 00:13:03.721 ================ 00:13:03.721 Number of Power States: 1 00:13:03.721 Current Power State: Power State #0 00:13:03.721 Power State #0: 00:13:03.721 Max Power: 0.00 W 00:13:03.721 Non-Operational State: Operational 00:13:03.721 Entry Latency: Not Reported 00:13:03.721 Exit Latency: Not Reported 00:13:03.721 Relative Read Throughput: 0 00:13:03.721 Relative Read Latency: 0 00:13:03.721 Relative Write Throughput: 0 00:13:03.721 Relative Write Latency: 0 00:13:03.721 Idle Power: Not Reported 00:13:03.721 Active Power: Not Reported 00:13:03.721 Non-Operational Permissive Mode: Not Supported 00:13:03.721 00:13:03.721 Health Information 00:13:03.721 ================== 00:13:03.721 Critical Warnings: 00:13:03.721 Available Spare Space: OK 00:13:03.721 Temperature: OK 00:13:03.721 Device Reliability: OK 00:13:03.721 Read Only: No 00:13:03.721 Volatile Memory Backup: OK 00:13:03.721 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:03.721 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:03.721 Available Spare: 0% 00:13:03.721 Available Spare Threshold: 0% 00:13:03.721 [2024-07-15 09:21:14.684003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:03.721 [2024-07-15 09:21:14.691829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:03.721 [2024-07-15 09:21:14.691891] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:03.721 [2024-07-15 09:21:14.691908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.721 [2024-07-15 09:21:14.691919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.721 [2024-07-15 09:21:14.691930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.721 [2024-07-15 09:21:14.691940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.721 [2024-07-15 09:21:14.692024] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:03.721 [2024-07-15 09:21:14.692046] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:03.721 [2024-07-15 09:21:14.693033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:03.721 [2024-07-15 09:21:14.693128] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:03.721 [2024-07-15 09:21:14.693142] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:03.721 [2024-07-15 09:21:14.694043] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:03.721 [2024-07-15 09:21:14.694067] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:03.721 [2024-07-15 09:21:14.694120] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:03.721 [2024-07-15 09:21:14.695338] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:03.721 Life Percentage Used: 0% 00:13:03.721 Data Units Read: 0 00:13:03.721 Data Units Written: 0 00:13:03.721 Host Read Commands: 0 00:13:03.721 Host Write Commands: 0 00:13:03.721 Controller Busy Time: 0 minutes 00:13:03.721 Power Cycles: 0 00:13:03.721 Power On Hours: 0 hours 00:13:03.721 Unsafe Shutdowns: 0 00:13:03.721 Unrecoverable Media Errors: 0 00:13:03.721 Lifetime Error Log Entries: 0 00:13:03.721 Warning Temperature Time: 0 minutes 00:13:03.721 Critical Temperature Time: 0 minutes 00:13:03.721 00:13:03.721 Number of Queues 00:13:03.721 ================ 00:13:03.721 Number of I/O Submission Queues: 127 00:13:03.721 Number of I/O Completion Queues: 127 00:13:03.721 00:13:03.721 Active Namespaces 00:13:03.721 ================= 00:13:03.721 Namespace ID:1 00:13:03.721 Error Recovery Timeout: Unlimited 00:13:03.721 Command Set Identifier: NVM (00h) 00:13:03.721 Deallocate: Supported 00:13:03.721 Deallocated/Unwritten Error: Not Supported 00:13:03.721 Deallocated Read Value: Unknown 00:13:03.721 Deallocate in Write Zeroes: Not Supported 00:13:03.721 Deallocated Guard Field: 0xFFFF 00:13:03.721 Flush: Supported 00:13:03.721 Reservation: Supported 00:13:03.721 Namespace Sharing Capabilities: Multiple Controllers 00:13:03.721 Size (in LBAs): 131072 (0GiB) 00:13:03.721 Capacity (in LBAs): 131072 (0GiB) 00:13:03.721 Utilization (in LBAs): 131072 (0GiB) 00:13:03.721 NGUID: 415E29FFE1864004A965904790FF8997 00:13:03.722 UUID: 415e29ff-e186-4004-a965-904790ff8997 00:13:03.722 Thin Provisioning: Not Supported 00:13:03.722 Per-NS Atomic Units: Yes 00:13:03.722 Atomic Boundary Size (Normal): 0 00:13:03.722 Atomic Boundary Size
(PFail): 0 00:13:03.722 Atomic Boundary Offset: 0 00:13:03.722 Maximum Single Source Range Length: 65535 00:13:03.722 Maximum Copy Length: 65535 00:13:03.722 Maximum Source Range Count: 1 00:13:03.722 NGUID/EUI64 Never Reused: No 00:13:03.722 Namespace Write Protected: No 00:13:03.722 Number of LBA Formats: 1 00:13:03.722 Current LBA Format: LBA Format #00 00:13:03.722 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:03.722 00:13:03.722 09:21:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:03.722 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.981 [2024-07-15 09:21:14.922545] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:09.255 Initializing NVMe Controllers 00:13:09.255 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:09.255 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:09.255 Initialization complete. Launching workers. 00:13:09.255 ======================================================== 00:13:09.255 Latency(us) 00:13:09.255 Device Information : IOPS MiB/s Average min max 00:13:09.255 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35395.18 138.26 3615.67 1145.64 7651.57 00:13:09.255 ======================================================== 00:13:09.255 Total : 35395.18 138.26 3615.67 1145.64 7651.57 00:13:09.255 00:13:09.255 [2024-07-15 09:21:20.026171] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:09.255 09:21:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:09.255 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.255 [2024-07-15 09:21:20.270895] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:14.528 Initializing NVMe Controllers 00:13:14.528 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:14.528 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:14.528 Initialization complete. Launching workers. 
00:13:14.528 ======================================================== 00:13:14.528 Latency(us) 00:13:14.528 Device Information : IOPS MiB/s Average min max 00:13:14.528 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32235.60 125.92 3970.17 1190.29 7494.73 00:13:14.528 ======================================================== 00:13:14.528 Total : 32235.60 125.92 3970.17 1190.29 7494.73 00:13:14.528 00:13:14.528 [2024-07-15 09:21:25.291108] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:14.528 09:21:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:14.528 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.528 [2024-07-15 09:21:25.505062] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:19.800 [2024-07-15 09:21:30.664943] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:19.800 Initializing NVMe Controllers 00:13:19.801 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:19.801 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:19.801 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:19.801 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:19.801 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:19.801 Initialization complete. Launching workers. 00:13:19.801 Starting thread on core 2 00:13:19.801 Starting thread on core 3 00:13:19.801 Starting thread on core 1 00:13:19.801 09:21:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:19.801 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.801 [2024-07-15 09:21:30.980040] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:23.090 [2024-07-15 09:21:34.066102] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:23.090 Initializing NVMe Controllers 00:13:23.090 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:23.090 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:23.090 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:23.090 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:23.090 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:23.090 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:23.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:23.090 Initialization complete. Launching workers. 
00:13:23.090 Starting thread on core 1 with urgent priority queue 00:13:23.090 Starting thread on core 2 with urgent priority queue 00:13:23.090 Starting thread on core 3 with urgent priority queue 00:13:23.090 Starting thread on core 0 with urgent priority queue 00:13:23.090 SPDK bdev Controller (SPDK2 ) core 0: 5912.67 IO/s 16.91 secs/100000 ios 00:13:23.090 SPDK bdev Controller (SPDK2 ) core 1: 5700.67 IO/s 17.54 secs/100000 ios 00:13:23.090 SPDK bdev Controller (SPDK2 ) core 2: 5043.67 IO/s 19.83 secs/100000 ios 00:13:23.090 SPDK bdev Controller (SPDK2 ) core 3: 6106.33 IO/s 16.38 secs/100000 ios 00:13:23.090 ======================================================== 00:13:23.090 00:13:23.090 09:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:23.090 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.349 [2024-07-15 09:21:34.353341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:23.349 Initializing NVMe Controllers 00:13:23.349 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:23.349 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:23.349 Namespace ID: 1 size: 0GB 00:13:23.349 Initialization complete. 00:13:23.349 INFO: using host memory buffer for IO 00:13:23.349 Hello world! 00:13:23.349 [2024-07-15 09:21:34.366478] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:23.349 09:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:23.349 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.607 [2024-07-15 09:21:34.657749] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:24.985 Initializing NVMe Controllers 00:13:24.985 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.985 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.985 Initialization complete. Launching workers. 
00:13:24.985 submit (in ns) avg, min, max = 9859.7, 3497.8, 4017180.0 00:13:24.985 complete (in ns) avg, min, max = 25193.0, 2100.0, 4016983.3 00:13:24.985 00:13:24.985 Submit histogram 00:13:24.985 ================ 00:13:24.985 Range in us Cumulative Count 00:13:24.985 3.484 - 3.508: 0.1107% ( 15) 00:13:24.985 3.508 - 3.532: 0.8044% ( 94) 00:13:24.985 3.532 - 3.556: 2.5756% ( 240) 00:13:24.985 3.556 - 3.579: 5.9114% ( 452) 00:13:24.985 3.579 - 3.603: 12.1181% ( 841) 00:13:24.985 3.603 - 3.627: 18.8266% ( 909) 00:13:24.985 3.627 - 3.650: 27.6015% ( 1189) 00:13:24.985 3.650 - 3.674: 35.2103% ( 1031) 00:13:24.985 3.674 - 3.698: 42.4649% ( 983) 00:13:24.985 3.698 - 3.721: 50.3395% ( 1067) 00:13:24.985 3.721 - 3.745: 55.6384% ( 718) 00:13:24.985 3.745 - 3.769: 60.4059% ( 646) 00:13:24.985 3.769 - 3.793: 63.8007% ( 460) 00:13:24.985 3.793 - 3.816: 67.2399% ( 466) 00:13:24.985 3.816 - 3.840: 70.5240% ( 445) 00:13:24.985 3.840 - 3.864: 74.3100% ( 513) 00:13:24.985 3.864 - 3.887: 77.8967% ( 486) 00:13:24.985 3.887 - 3.911: 81.4465% ( 481) 00:13:24.985 3.911 - 3.935: 84.6716% ( 437) 00:13:24.985 3.935 - 3.959: 87.0185% ( 318) 00:13:24.985 3.959 - 3.982: 88.7011% ( 228) 00:13:24.985 3.982 - 4.006: 90.1771% ( 200) 00:13:24.985 4.006 - 4.030: 91.2251% ( 142) 00:13:24.985 4.030 - 4.053: 92.4133% ( 161) 00:13:24.985 4.053 - 4.077: 93.3506% ( 127) 00:13:24.985 4.077 - 4.101: 94.2804% ( 126) 00:13:24.985 4.101 - 4.124: 94.9151% ( 86) 00:13:24.985 4.124 - 4.148: 95.4613% ( 74) 00:13:24.985 4.148 - 4.172: 95.8524% ( 53) 00:13:24.985 4.172 - 4.196: 96.2140% ( 49) 00:13:24.985 4.196 - 4.219: 96.4207% ( 28) 00:13:24.985 4.219 - 4.243: 96.5535% ( 18) 00:13:24.985 4.243 - 4.267: 96.6863% ( 18) 00:13:24.986 4.267 - 4.290: 96.8118% ( 17) 00:13:24.986 4.290 - 4.314: 96.8930% ( 11) 00:13:24.986 4.314 - 4.338: 96.9963% ( 14) 00:13:24.986 4.338 - 4.361: 97.1292% ( 18) 00:13:24.986 4.361 - 4.385: 97.2103% ( 11) 00:13:24.986 4.385 - 4.409: 97.2768% ( 9) 00:13:24.986 4.409 - 4.433: 97.3210% ( 6) 00:13:24.986 4.433 - 4.456: 97.3506% ( 4) 00:13:24.986 4.456 - 4.480: 97.3653% ( 2) 00:13:24.986 4.480 - 4.504: 97.3948% ( 4) 00:13:24.986 4.504 - 4.527: 97.4096% ( 2) 00:13:24.986 4.527 - 4.551: 97.4170% ( 1) 00:13:24.986 4.551 - 4.575: 97.4391% ( 3) 00:13:24.986 4.599 - 4.622: 97.4539% ( 2) 00:13:24.986 4.622 - 4.646: 97.4613% ( 1) 00:13:24.986 4.646 - 4.670: 97.4760% ( 2) 00:13:24.986 4.670 - 4.693: 97.4908% ( 2) 00:13:24.986 4.693 - 4.717: 97.4982% ( 1) 00:13:24.986 4.717 - 4.741: 97.5055% ( 1) 00:13:24.986 4.741 - 4.764: 97.5129% ( 1) 00:13:24.986 4.764 - 4.788: 97.5203% ( 1) 00:13:24.986 4.788 - 4.812: 97.5572% ( 5) 00:13:24.986 4.812 - 4.836: 97.5646% ( 1) 00:13:24.986 4.836 - 4.859: 97.5793% ( 2) 00:13:24.986 4.859 - 4.883: 97.6089% ( 4) 00:13:24.986 4.883 - 4.907: 97.6531% ( 6) 00:13:24.986 4.907 - 4.930: 97.7048% ( 7) 00:13:24.986 4.930 - 4.954: 97.7343% ( 4) 00:13:24.986 4.954 - 4.978: 97.7712% ( 5) 00:13:24.986 4.978 - 5.001: 97.8081% ( 5) 00:13:24.986 5.001 - 5.025: 97.8376% ( 4) 00:13:24.986 5.025 - 5.049: 97.8598% ( 3) 00:13:24.986 5.049 - 5.073: 97.9114% ( 7) 00:13:24.986 5.073 - 5.096: 97.9557% ( 6) 00:13:24.986 5.096 - 5.120: 98.0000% ( 6) 00:13:24.986 5.120 - 5.144: 98.0443% ( 6) 00:13:24.986 5.144 - 5.167: 98.0812% ( 5) 00:13:24.986 5.167 - 5.191: 98.1328% ( 7) 00:13:24.986 5.191 - 5.215: 98.1476% ( 2) 00:13:24.986 5.215 - 5.239: 98.1697% ( 3) 00:13:24.986 5.239 - 5.262: 98.1845% ( 2) 00:13:24.986 5.262 - 5.286: 98.2214% ( 5) 00:13:24.986 5.286 - 5.310: 98.2288% ( 1) 00:13:24.986 5.310 - 5.333: 98.2509% ( 3) 
00:13:24.986 5.333 - 5.357: 98.2731% ( 3) 00:13:24.986 5.404 - 5.428: 98.2804% ( 1) 00:13:24.986 5.476 - 5.499: 98.2878% ( 1) 00:13:24.986 5.523 - 5.547: 98.2952% ( 1) 00:13:24.986 5.570 - 5.594: 98.3100% ( 2) 00:13:24.986 5.618 - 5.641: 98.3173% ( 1) 00:13:24.986 5.665 - 5.689: 98.3395% ( 3) 00:13:24.986 5.713 - 5.736: 98.3469% ( 1) 00:13:24.986 5.807 - 5.831: 98.3616% ( 2) 00:13:24.986 6.353 - 6.400: 98.3690% ( 1) 00:13:24.986 6.921 - 6.969: 98.3764% ( 1) 00:13:24.986 7.159 - 7.206: 98.3838% ( 1) 00:13:24.986 7.206 - 7.253: 98.3911% ( 1) 00:13:24.986 7.396 - 7.443: 98.4059% ( 2) 00:13:24.986 7.538 - 7.585: 98.4207% ( 2) 00:13:24.986 7.727 - 7.775: 98.4280% ( 1) 00:13:24.986 7.870 - 7.917: 98.4354% ( 1) 00:13:24.986 7.917 - 7.964: 98.4428% ( 1) 00:13:24.986 8.012 - 8.059: 98.4502% ( 1) 00:13:24.986 8.059 - 8.107: 98.4649% ( 2) 00:13:24.986 8.107 - 8.154: 98.4797% ( 2) 00:13:24.986 8.154 - 8.201: 98.4871% ( 1) 00:13:24.986 8.201 - 8.249: 98.4945% ( 1) 00:13:24.986 8.391 - 8.439: 98.5018% ( 1) 00:13:24.986 8.439 - 8.486: 98.5092% ( 1) 00:13:24.986 8.533 - 8.581: 98.5166% ( 1) 00:13:24.986 8.628 - 8.676: 98.5240% ( 1) 00:13:24.986 8.676 - 8.723: 98.5314% ( 1) 00:13:24.986 8.723 - 8.770: 98.5387% ( 1) 00:13:24.986 8.913 - 8.960: 98.5535% ( 2) 00:13:24.986 8.960 - 9.007: 98.5609% ( 1) 00:13:24.986 9.102 - 9.150: 98.5756% ( 2) 00:13:24.986 9.150 - 9.197: 98.5830% ( 1) 00:13:24.986 9.197 - 9.244: 98.5904% ( 1) 00:13:24.986 9.244 - 9.292: 98.5978% ( 1) 00:13:24.986 9.292 - 9.339: 98.6052% ( 1) 00:13:24.986 9.481 - 9.529: 98.6125% ( 1) 00:13:24.986 9.529 - 9.576: 98.6199% ( 1) 00:13:24.986 9.624 - 9.671: 98.6273% ( 1) 00:13:24.986 9.671 - 9.719: 98.6421% ( 2) 00:13:24.986 9.719 - 9.766: 98.6494% ( 1) 00:13:24.986 9.813 - 9.861: 98.6568% ( 1) 00:13:24.986 10.524 - 10.572: 98.6642% ( 1) 00:13:24.986 10.714 - 10.761: 98.6716% ( 1) 00:13:24.986 10.761 - 10.809: 98.6790% ( 1) 00:13:24.986 10.904 - 10.951: 98.6863% ( 1) 00:13:24.986 11.236 - 11.283: 98.6937% ( 1) 00:13:24.986 11.425 - 11.473: 98.7011% ( 1) 00:13:24.986 11.615 - 11.662: 98.7085% ( 1) 00:13:24.986 11.710 - 11.757: 98.7159% ( 1) 00:13:24.986 11.757 - 11.804: 98.7232% ( 1) 00:13:24.986 11.852 - 11.899: 98.7306% ( 1) 00:13:24.986 12.041 - 12.089: 98.7380% ( 1) 00:13:24.986 12.421 - 12.516: 98.7454% ( 1) 00:13:24.986 12.610 - 12.705: 98.7528% ( 1) 00:13:24.986 13.369 - 13.464: 98.7601% ( 1) 00:13:24.986 13.464 - 13.559: 98.7675% ( 1) 00:13:24.986 13.748 - 13.843: 98.7749% ( 1) 00:13:24.986 13.843 - 13.938: 98.7823% ( 1) 00:13:24.986 14.033 - 14.127: 98.7970% ( 2) 00:13:24.986 14.222 - 14.317: 98.8044% ( 1) 00:13:24.986 14.507 - 14.601: 98.8192% ( 2) 00:13:24.986 15.834 - 15.929: 98.8266% ( 1) 00:13:24.986 17.067 - 17.161: 98.8339% ( 1) 00:13:24.986 17.161 - 17.256: 98.8413% ( 1) 00:13:24.986 17.256 - 17.351: 98.8561% ( 2) 00:13:24.986 17.351 - 17.446: 98.8782% ( 3) 00:13:24.986 17.446 - 17.541: 98.9077% ( 4) 00:13:24.986 17.541 - 17.636: 98.9225% ( 2) 00:13:24.986 17.636 - 17.730: 98.9668% ( 6) 00:13:24.986 17.730 - 17.825: 99.0480% ( 11) 00:13:24.986 17.825 - 17.920: 99.0923% ( 6) 00:13:24.986 17.920 - 18.015: 99.1365% ( 6) 00:13:24.986 18.015 - 18.110: 99.2030% ( 9) 00:13:24.986 18.110 - 18.204: 99.2620% ( 8) 00:13:24.986 18.204 - 18.299: 99.3506% ( 12) 00:13:24.986 18.299 - 18.394: 99.4170% ( 9) 00:13:24.986 18.394 - 18.489: 99.5055% ( 12) 00:13:24.986 18.489 - 18.584: 99.5572% ( 7) 00:13:24.986 18.584 - 18.679: 99.6310% ( 10) 00:13:24.986 18.679 - 18.773: 99.6605% ( 4) 00:13:24.986 18.773 - 18.868: 99.6900% ( 4) 00:13:24.986 18.868 - 
18.963: 99.7122% ( 3) 00:13:24.986 18.963 - 19.058: 99.7565% ( 6) 00:13:24.986 19.058 - 19.153: 99.7638% ( 1) 00:13:24.986 19.153 - 19.247: 99.7712% ( 1) 00:13:24.986 19.342 - 19.437: 99.7786% ( 1) 00:13:24.986 20.290 - 20.385: 99.7860% ( 1) 00:13:24.986 20.480 - 20.575: 99.7934% ( 1) 00:13:24.986 20.670 - 20.764: 99.8007% ( 1) 00:13:24.986 20.764 - 20.859: 99.8081% ( 1) 00:13:24.986 21.997 - 22.092: 99.8155% ( 1) 00:13:24.986 25.410 - 25.600: 99.8229% ( 1) 00:13:24.986 26.927 - 27.117: 99.8303% ( 1) 00:13:24.986 27.496 - 27.686: 99.8376% ( 1) 00:13:24.986 30.341 - 30.530: 99.8450% ( 1) 00:13:24.986 32.806 - 32.996: 99.8524% ( 1) 00:13:24.986 3980.705 - 4004.978: 99.9557% ( 14) 00:13:24.986 4004.978 - 4029.250: 100.0000% ( 6) 00:13:24.986 00:13:24.986 Complete histogram 00:13:24.986 ================== 00:13:24.986 Range in us Cumulative Count 00:13:24.986 2.098 - 2.110: 0.1845% ( 25) 00:13:24.986 2.110 - 2.121: 7.8229% ( 1035) 00:13:24.986 2.121 - 2.133: 15.1070% ( 987) 00:13:24.986 2.133 - 2.145: 36.7380% ( 2931) 00:13:24.986 2.145 - 2.157: 44.1697% ( 1007) 00:13:24.986 2.157 - 2.169: 56.0295% ( 1607) 00:13:24.986 2.169 - 2.181: 65.2177% ( 1245) 00:13:24.986 2.181 - 2.193: 68.0886% ( 389) 00:13:24.986 2.193 - 2.204: 71.0775% ( 405) 00:13:24.986 2.204 - 2.216: 76.3542% ( 715) 00:13:24.986 2.216 - 2.228: 78.4280% ( 281) 00:13:24.986 2.228 - 2.240: 81.8819% ( 468) 00:13:24.986 2.240 - 2.252: 84.2509% ( 321) 00:13:24.986 2.252 - 2.264: 85.2472% ( 135) 00:13:24.986 2.264 - 2.276: 86.4576% ( 164) 00:13:24.986 2.276 - 2.287: 89.3284% ( 389) 00:13:24.986 2.287 - 2.299: 91.3210% ( 270) 00:13:24.986 2.299 - 2.311: 92.8192% ( 203) 00:13:24.986 2.311 - 2.323: 94.0221% ( 163) 00:13:24.986 2.323 - 2.335: 94.2288% ( 28) 00:13:24.986 2.335 - 2.347: 94.4133% ( 25) 00:13:24.986 2.347 - 2.359: 94.6790% ( 36) 00:13:24.986 2.359 - 2.370: 95.1587% ( 65) 00:13:24.986 2.370 - 2.382: 95.4022% ( 33) 00:13:24.986 2.382 - 2.394: 95.5055% ( 14) 00:13:24.986 2.394 - 2.406: 95.6089% ( 14) 00:13:24.986 2.406 - 2.418: 95.6679% ( 8) 00:13:24.986 2.418 - 2.430: 95.8303% ( 22) 00:13:24.986 2.430 - 2.441: 96.1476% ( 43) 00:13:24.986 2.441 - 2.453: 96.5314% ( 52) 00:13:24.986 2.453 - 2.465: 96.8487% ( 43) 00:13:24.986 2.465 - 2.477: 97.1956% ( 47) 00:13:24.986 2.477 - 2.489: 97.4465% ( 34) 00:13:24.986 2.489 - 2.501: 97.6310% ( 25) 00:13:24.986 2.501 - 2.513: 97.7122% ( 11) 00:13:24.986 2.513 - 2.524: 97.8155% ( 14) 00:13:24.986 2.524 - 2.536: 97.9041% ( 12) 00:13:24.986 2.536 - 2.548: 97.9705% ( 9) 00:13:24.986 2.548 - 2.560: 98.0443% ( 10) 00:13:24.986 2.560 - 2.572: 98.1328% ( 12) 00:13:24.986 2.572 - 2.584: 98.1697% ( 5) 00:13:24.986 2.584 - 2.596: 98.2066% ( 5) 00:13:24.986 2.596 - 2.607: 98.2509% ( 6) 00:13:24.986 2.607 - 2.619: 98.2731% ( 3) 00:13:24.986 2.619 - 2.631: 98.2804% ( 1) 00:13:24.986 2.631 - 2.643: 98.2952% ( 2) 00:13:24.987 2.643 - 2.655: 98.3247% ( 4) 00:13:24.987 2.655 - 2.667: 98.3616% ( 5) 00:13:24.987 2.667 - 2.679: 98.3690% ( 1) 00:13:24.987 2.679 - 2.690: 98.3764% ( 1) 00:13:24.987 2.690 - 2.702: 98.3838% ( 1) 00:13:24.987 2.702 - 2.714: 98.3911% ( 1) 00:13:24.987 2.714 - 2.726: 98.4059% ( 2) 00:13:24.987 2.726 - 2.738: 98.4280% ( 3) 00:13:24.987 2.738 - 2.750: 98.4354% ( 1) 00:13:24.987 2.761 - 2.773: 98.4428% ( 1) 00:13:24.987 2.773 - 2.785: 98.4576% ( 2) 00:13:24.987 2.797 - 2.809: 98.4649% ( 1) 00:13:24.987 2.809 - 2.821: 98.4797% ( 2) 00:13:24.987 2.880 - 2.892: 98.4871% ( 1) 00:13:24.987 2.987 - 2.999: 98.4945% ( 1) 00:13:24.987 3.034 - 3.058: 98.5018% ( 1) 00:13:24.987 3.129 - 3.153: 98.5092% 
( 1) 00:13:24.987 3.224 - 3.247: 98.5166% ( 1) 00:13:24.987 3.461 - 3.484: 98.5240% ( 1) 00:13:24.987 3.484 - 3.508: 98.5387% ( 2) 00:13:24.987 3.556 - 3.579: 98.5461% ( 1) 00:13:24.987 3.650 - 3.674: 98.5535% ( 1) 00:13:24.987 3.674 - 3.698: 98.5609% ( 1) 00:13:24.987 3.721 - 3.745: 98.5683% ( 1) 00:13:24.987 3.769 - 3.793: 98.5756% ( 1) 00:13:24.987 3.793 - 3.816: 98.5904% ( 2) 00:13:24.987 3.816 - 3.840: 98.5978% ( 1) 00:13:24.987 4.006 - 4.030: 98.6052% ( 1) 00:13:24.987 4.030 - 4.053: 98.6125% ( 1) 00:13:24.987 [2024-07-15 09:21:35.756580] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:24.987 4.053 - 4.077: 98.6199% ( 1) 00:13:24.987 4.148 - 4.172: 98.6273% ( 1) 00:13:24.987 4.717 - 4.741: 98.6347% ( 1) 00:13:24.987 5.404 - 5.428: 98.6421% ( 1) 00:13:24.987 5.499 - 5.523: 98.6494% ( 1) 00:13:24.987 5.641 - 5.665: 98.6568% ( 1) 00:13:24.987 5.784 - 5.807: 98.6642% ( 1) 00:13:24.987 5.902 - 5.926: 98.6716% ( 1) 00:13:24.987 6.068 - 6.116: 98.6863% ( 2) 00:13:24.987 6.258 - 6.305: 98.7011% ( 2) 00:13:24.987 6.305 - 6.353: 98.7085% ( 1) 00:13:24.987 6.447 - 6.495: 98.7159% ( 1) 00:13:24.987 6.684 - 6.732: 98.7232% ( 1) 00:13:24.987 6.969 - 7.016: 98.7306% ( 1) 00:13:24.987 7.016 - 7.064: 98.7380% ( 1) 00:13:24.987 7.064 - 7.111: 98.7454% ( 1) 00:13:24.987 7.348 - 7.396: 98.7528% ( 1) 00:13:24.987 7.917 - 7.964: 98.7601% ( 1) 00:13:24.987 8.628 - 8.676: 98.7675% ( 1) 00:13:24.987 10.572 - 10.619: 98.7749% ( 1) 00:13:24.987 15.644 - 15.739: 98.7970% ( 3) 00:13:24.987 15.739 - 15.834: 98.8192% ( 3) 00:13:24.987 15.834 - 15.929: 98.8266% ( 1) 00:13:24.987 15.929 - 16.024: 98.8487% ( 3) 00:13:24.987 16.024 - 16.119: 98.9077% ( 8) 00:13:24.987 16.119 - 16.213: 98.9299% ( 3) 00:13:24.987 16.213 - 16.308: 98.9594% ( 4) 00:13:24.987 16.308 - 16.403: 98.9963% ( 5) 00:13:24.987 16.403 - 16.498: 99.0258% ( 4) 00:13:24.987 16.498 - 16.593: 99.0701% ( 6) 00:13:24.987 16.593 - 16.687: 99.1218% ( 7) 00:13:24.987 16.687 - 16.782: 99.1808% ( 8) 00:13:24.987 16.782 - 16.877: 99.2103% ( 4) 00:13:24.987 16.877 - 16.972: 99.2399% ( 4) 00:13:24.987 16.972 - 17.067: 99.2694% ( 4) 00:13:24.987 17.067 - 17.161: 99.2915% ( 3) 00:13:24.987 17.161 - 17.256: 99.3063% ( 2) 00:13:24.987 17.256 - 17.351: 99.3210% ( 2) 00:13:24.987 17.351 - 17.446: 99.3284% ( 1) 00:13:24.987 17.446 - 17.541: 99.3358% ( 1) 00:13:24.987 17.541 - 17.636: 99.3579% ( 3) 00:13:24.987 17.730 - 17.825: 99.3727% ( 2) 00:13:24.987 17.920 - 18.015: 99.3801% ( 1) 00:13:24.987 18.015 - 18.110: 99.3875% ( 1) 00:13:24.987 18.584 - 18.679: 99.3948% ( 1) 00:13:24.987 18.679 - 18.773: 99.4022% ( 1) 00:13:24.987 18.773 - 18.868: 99.4096% ( 1) 00:13:24.987 23.609 - 23.704: 99.4170% ( 1) 00:13:24.987 27.496 - 27.686: 99.4244% ( 1) 00:13:24.987 3046.210 - 3058.347: 99.4391% ( 2) 00:13:24.987 3932.160 - 3956.433: 99.4465% ( 1) 00:13:24.987 3980.705 - 4004.978: 99.7934% ( 47) 00:13:24.987 4004.978 - 4029.250: 100.0000% ( 28) 00:13:24.987 00:13:24.987 09:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:24.987 09:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:24.987 09:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:24.987 09:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:24.987 09:21:35 nvmf_tcp.nvmf_vfio_user
-- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:24.987 [ 00:13:24.987 { 00:13:24.987 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:24.987 "subtype": "Discovery", 00:13:24.987 "listen_addresses": [], 00:13:24.987 "allow_any_host": true, 00:13:24.987 "hosts": [] 00:13:24.987 }, 00:13:24.987 { 00:13:24.987 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:24.987 "subtype": "NVMe", 00:13:24.987 "listen_addresses": [ 00:13:24.987 { 00:13:24.987 "trtype": "VFIOUSER", 00:13:24.987 "adrfam": "IPv4", 00:13:24.987 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:24.987 "trsvcid": "0" 00:13:24.987 } 00:13:24.987 ], 00:13:24.987 "allow_any_host": true, 00:13:24.987 "hosts": [], 00:13:24.987 "serial_number": "SPDK1", 00:13:24.987 "model_number": "SPDK bdev Controller", 00:13:24.987 "max_namespaces": 32, 00:13:24.987 "min_cntlid": 1, 00:13:24.987 "max_cntlid": 65519, 00:13:24.987 "namespaces": [ 00:13:24.987 { 00:13:24.987 "nsid": 1, 00:13:24.987 "bdev_name": "Malloc1", 00:13:24.987 "name": "Malloc1", 00:13:24.987 "nguid": "87F10B2B24044DB980C0FB200322FC78", 00:13:24.987 "uuid": "87f10b2b-2404-4db9-80c0-fb200322fc78" 00:13:24.987 }, 00:13:24.987 { 00:13:24.987 "nsid": 2, 00:13:24.987 "bdev_name": "Malloc3", 00:13:24.987 "name": "Malloc3", 00:13:24.987 "nguid": "1E89D2D65FB6482FB8D9D46D244415AE", 00:13:24.987 "uuid": "1e89d2d6-5fb6-482f-b8d9-d46d244415ae" 00:13:24.987 } 00:13:24.987 ] 00:13:24.987 }, 00:13:24.987 { 00:13:24.987 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:24.987 "subtype": "NVMe", 00:13:24.987 "listen_addresses": [ 00:13:24.987 { 00:13:24.987 "trtype": "VFIOUSER", 00:13:24.987 "adrfam": "IPv4", 00:13:24.987 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:24.987 "trsvcid": "0" 00:13:24.987 } 00:13:24.987 ], 00:13:24.987 "allow_any_host": true, 00:13:24.987 "hosts": [], 00:13:24.987 "serial_number": "SPDK2", 00:13:24.987 "model_number": "SPDK bdev Controller", 00:13:24.987 "max_namespaces": 32, 00:13:24.987 "min_cntlid": 1, 00:13:24.987 "max_cntlid": 65519, 00:13:24.987 "namespaces": [ 00:13:24.987 { 00:13:24.987 "nsid": 1, 00:13:24.987 "bdev_name": "Malloc2", 00:13:24.987 "name": "Malloc2", 00:13:24.987 "nguid": "415E29FFE1864004A965904790FF8997", 00:13:24.987 "uuid": "415e29ff-e186-4004-a965-904790ff8997" 00:13:24.987 } 00:13:24.987 ] 00:13:24.987 } 00:13:24.987 ] 00:13:24.987 09:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:24.987 09:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=787532 00:13:24.987 09:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:24.987 09:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:24.987 09:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:24.987 09:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:24.987 09:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:13:24.987 09:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:13:24.987 09:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:13:24.987 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.246 09:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:25.246 09:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:13:25.246 09:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:13:25.246 09:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:13:25.246 [2024-07-15 09:21:36.257732] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:25.246 09:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:25.246 09:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:25.246 09:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:25.246 09:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:25.246 09:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:25.503 Malloc4 00:13:25.503 09:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:25.760 [2024-07-15 09:21:36.812832] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:25.760 09:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:25.760 Asynchronous Event Request test 00:13:25.760 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:25.760 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:25.760 Registering asynchronous event callbacks... 00:13:25.760 Starting namespace attribute notice tests for all controllers... 00:13:25.760 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:25.760 aer_cb - Changed Namespace 00:13:25.760 Cleaning up... 
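The aer tool signals readiness by creating the touch file passed with -t, and the harness polls for it before arming the trigger. Below is a condensed sketch of the waitforfile helper whose traced steps (autotest_common.sh@1265-@1276) appear above; the retry count and sleep interval come from the logged checks, while the failure return on timeout is an assumption:

    # Poll every 0.1s for a file to appear; give up after 200 attempts (assumed).
    waitforfile() {
        local i=0
        while [ ! -e "$1" ]; do
            [ "$i" -lt 200 ] || return 1
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }
    waitforfile /tmp/aer_touch_file    # as invoked for the AER test above

Once the file exists, Malloc4 is attached as namespace 2 to raise the namespace-attribute AEN, and the updated nvmf_get_subsystems listing follows.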
00:13:26.019 [ 00:13:26.019 { 00:13:26.019 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:26.019 "subtype": "Discovery", 00:13:26.019 "listen_addresses": [], 00:13:26.019 "allow_any_host": true, 00:13:26.019 "hosts": [] 00:13:26.019 }, 00:13:26.019 { 00:13:26.019 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:26.019 "subtype": "NVMe", 00:13:26.019 "listen_addresses": [ 00:13:26.019 { 00:13:26.019 "trtype": "VFIOUSER", 00:13:26.019 "adrfam": "IPv4", 00:13:26.019 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:26.019 "trsvcid": "0" 00:13:26.019 } 00:13:26.019 ], 00:13:26.019 "allow_any_host": true, 00:13:26.019 "hosts": [], 00:13:26.019 "serial_number": "SPDK1", 00:13:26.019 "model_number": "SPDK bdev Controller", 00:13:26.019 "max_namespaces": 32, 00:13:26.019 "min_cntlid": 1, 00:13:26.019 "max_cntlid": 65519, 00:13:26.019 "namespaces": [ 00:13:26.019 { 00:13:26.019 "nsid": 1, 00:13:26.019 "bdev_name": "Malloc1", 00:13:26.019 "name": "Malloc1", 00:13:26.019 "nguid": "87F10B2B24044DB980C0FB200322FC78", 00:13:26.019 "uuid": "87f10b2b-2404-4db9-80c0-fb200322fc78" 00:13:26.019 }, 00:13:26.019 { 00:13:26.019 "nsid": 2, 00:13:26.019 "bdev_name": "Malloc3", 00:13:26.019 "name": "Malloc3", 00:13:26.019 "nguid": "1E89D2D65FB6482FB8D9D46D244415AE", 00:13:26.019 "uuid": "1e89d2d6-5fb6-482f-b8d9-d46d244415ae" 00:13:26.019 } 00:13:26.019 ] 00:13:26.019 }, 00:13:26.019 { 00:13:26.019 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:26.019 "subtype": "NVMe", 00:13:26.019 "listen_addresses": [ 00:13:26.019 { 00:13:26.019 "trtype": "VFIOUSER", 00:13:26.019 "adrfam": "IPv4", 00:13:26.019 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:26.019 "trsvcid": "0" 00:13:26.019 } 00:13:26.019 ], 00:13:26.019 "allow_any_host": true, 00:13:26.019 "hosts": [], 00:13:26.019 "serial_number": "SPDK2", 00:13:26.019 "model_number": "SPDK bdev Controller", 00:13:26.019 "max_namespaces": 32, 00:13:26.019 "min_cntlid": 1, 00:13:26.019 "max_cntlid": 65519, 00:13:26.019 "namespaces": [ 00:13:26.019 { 00:13:26.019 "nsid": 1, 00:13:26.019 "bdev_name": "Malloc2", 00:13:26.019 "name": "Malloc2", 00:13:26.019 "nguid": "415E29FFE1864004A965904790FF8997", 00:13:26.019 "uuid": "415e29ff-e186-4004-a965-904790ff8997" 00:13:26.019 }, 00:13:26.019 { 00:13:26.019 "nsid": 2, 00:13:26.019 "bdev_name": "Malloc4", 00:13:26.019 "name": "Malloc4", 00:13:26.019 "nguid": "5C9612850BAF49D8A030AFF92DA66659", 00:13:26.019 "uuid": "5c961285-0baf-49d8-a030-aff92da66659" 00:13:26.019 } 00:13:26.019 ] 00:13:26.019 } 00:13:26.019 ] 00:13:26.019 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 787532 00:13:26.019 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:26.019 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 781313 00:13:26.019 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 781313 ']' 00:13:26.019 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 781313 00:13:26.019 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:26.019 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.019 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781313 00:13:26.019 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:26.019 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:13:26.019 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781313' 00:13:26.019 killing process with pid 781313 00:13:26.019 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 781313 00:13:26.019 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 781313 00:13:26.276 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=787799 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 787799' 00:13:26.277 Process pid: 787799 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 787799 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 787799 ']' 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.277 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:26.536 [2024-07-15 09:21:37.510469] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:26.536 [2024-07-15 09:21:37.511558] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:13:26.536 [2024-07-15 09:21:37.511612] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.536 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.536 [2024-07-15 09:21:37.573165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.536 [2024-07-15 09:21:37.684119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.536 [2024-07-15 09:21:37.684171] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:26.536 [2024-07-15 09:21:37.684184] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.536 [2024-07-15 09:21:37.684195] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.536 [2024-07-15 09:21:37.684204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.536 [2024-07-15 09:21:37.684283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.536 [2024-07-15 09:21:37.684349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.536 [2024-07-15 09:21:37.684371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.536 [2024-07-15 09:21:37.684375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.795 [2024-07-15 09:21:37.789770] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:26.795 [2024-07-15 09:21:37.790011] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:26.795 [2024-07-15 09:21:37.790290] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:26.795 [2024-07-15 09:21:37.790958] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:26.795 [2024-07-15 09:21:37.791210] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:26.795 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.795 09:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:26.795 09:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:27.731 09:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:27.988 09:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:27.988 09:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:27.988 09:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:27.988 09:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:27.988 09:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:28.246 Malloc1 00:13:28.246 09:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:28.504 09:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:28.762 09:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:29.020 09:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:13:29.020 09:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:29.020 09:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:29.278 Malloc2 00:13:29.278 09:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:29.536 09:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:29.794 09:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:30.052 09:21:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:30.052 09:21:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 787799 00:13:30.052 09:21:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 787799 ']' 00:13:30.052 09:21:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 787799 00:13:30.052 09:21:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:30.052 09:21:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:30.052 09:21:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 787799 00:13:30.052 09:21:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:30.052 09:21:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:30.052 09:21:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 787799' 00:13:30.052 killing process with pid 787799 00:13:30.052 09:21:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 787799 00:13:30.052 09:21:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 787799 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:30.621 00:13:30.621 real 0m52.981s 00:13:30.621 user 3m29.066s 00:13:30.621 sys 0m4.367s 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:30.621 ************************************ 00:13:30.621 END TEST nvmf_vfio_user 00:13:30.621 ************************************ 00:13:30.621 09:21:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:30.621 09:21:41 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:30.621 09:21:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:30.621 09:21:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.621 09:21:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:30.621 ************************************ 00:13:30.621 START TEST 
nvmf_vfio_user_nvme_compliance 00:13:30.621 ************************************ 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:30.621 * Looking for test storage... 00:13:30.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=[paths/export.sh@2-@6: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin repeatedly prepended to the stock PATH, then exported and echoed] 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- #
MALLOC_BDEV_SIZE=64 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=788279 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:30.621 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 788279' 00:13:30.621 Process pid: 788279 00:13:30.622 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:30.622 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 788279 00:13:30.622 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 788279 ']' 00:13:30.622 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.622 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.622 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.622 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.622 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:30.622 [2024-07-15 09:21:41.682244] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:13:30.622 [2024-07-15 09:21:41.682315] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.622 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.622 [2024-07-15 09:21:41.739525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:30.880 [2024-07-15 09:21:41.846143] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.880 [2024-07-15 09:21:41.846207] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.880 [2024-07-15 09:21:41.846220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.880 [2024-07-15 09:21:41.846233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.880 [2024-07-15 09:21:41.846243] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:30.880 [2024-07-15 09:21:41.846347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.880 [2024-07-15 09:21:41.846407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.880 [2024-07-15 09:21:41.846410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.880 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.880 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:30.880 09:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:31.818 09:21:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:31.818 09:21:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:31.818 09:21:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:31.818 09:21:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.818 09:21:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:31.818 09:21:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.818 09:21:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:31.818 09:21:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:31.818 09:21:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.818 09:21:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:31.818 malloc0 00:13:31.818 09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.818 09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:31.818 09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.818 09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:32.077 09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.077 09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:32.077 09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.077 09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:32.077 09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.077 09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:32.077 09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.077 09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:32.077 09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.077 
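Collected in one place, the rpc_cmd sequence just traced is the vfio-user target provisioning for this compliance run (a sketch; rpc_cmd is the autotest wrapper around scripts/rpc.py, and the arguments are copied from the log):

    rpc_cmd nvmf_create_transport -t VFIOUSER                 # compliance.sh@31
    mkdir -p /var/run/vfio-user                               # compliance.sh@33
    rpc_cmd bdev_malloc_create 64 512 -b malloc0              # 64 MB bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

With the target provisioned, the nvme_compliance binary is run against it next.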
09:21:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:32.077 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.077 00:13:32.077 00:13:32.077 CUnit - A unit testing framework for C - Version 2.1-3 00:13:32.077 http://cunit.sourceforge.net/ 00:13:32.077 00:13:32.077 00:13:32.077 Suite: nvme_compliance 00:13:32.077 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 09:21:43.196276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.077 [2024-07-15 09:21:43.197682] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:32.077 [2024-07-15 09:21:43.197707] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:32.077 [2024-07-15 09:21:43.197723] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:32.077 [2024-07-15 09:21:43.199291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.077 passed 00:13:32.335 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 09:21:43.284896] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.335 [2024-07-15 09:21:43.287915] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.335 passed 00:13:32.335 Test: admin_identify_ns ...[2024-07-15 09:21:43.373293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.335 [2024-07-15 09:21:43.432818] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:32.335 [2024-07-15 09:21:43.440819] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:32.335 [2024-07-15 09:21:43.461945] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.335 passed 00:13:32.594 Test: admin_get_features_mandatory_features ...[2024-07-15 09:21:43.546011] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.594 [2024-07-15 09:21:43.549032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.594 passed 00:13:32.594 Test: admin_get_features_optional_features ...[2024-07-15 09:21:43.632575] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.594 [2024-07-15 09:21:43.635596] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.594 passed 00:13:32.594 Test: admin_set_features_number_of_queues ...[2024-07-15 09:21:43.719295] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.854 [2024-07-15 09:21:43.823903] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.854 passed 00:13:32.854 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 09:21:43.907952] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.854 [2024-07-15 09:21:43.910984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.854 passed 00:13:32.854 Test: admin_get_log_page_with_lpo ...[2024-07-15 09:21:43.993317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.114 [2024-07-15 09:21:44.060830] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:33.114 [2024-07-15 09:21:44.073878] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.114 passed 00:13:33.114 Test: fabric_property_get ...[2024-07-15 09:21:44.157361] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.114 [2024-07-15 09:21:44.158633] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:33.114 [2024-07-15 09:21:44.160382] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.114 passed 00:13:33.114 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 09:21:44.245921] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.114 [2024-07-15 09:21:44.247225] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:33.114 [2024-07-15 09:21:44.248941] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.114 passed 00:13:33.373 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 09:21:44.333491] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.373 [2024-07-15 09:21:44.416823] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:33.373 [2024-07-15 09:21:44.432813] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:33.373 [2024-07-15 09:21:44.437913] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.373 passed 00:13:33.373 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 09:21:44.520437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.373 [2024-07-15 09:21:44.521737] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:33.373 [2024-07-15 09:21:44.523467] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.373 passed 00:13:33.632 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 09:21:44.604639] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.632 [2024-07-15 09:21:44.680815] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:33.632 [2024-07-15 09:21:44.704828] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:33.632 [2024-07-15 09:21:44.709919] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.632 passed 00:13:33.632 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 09:21:44.793502] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.632 [2024-07-15 09:21:44.794830] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:33.632 [2024-07-15 09:21:44.794870] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:33.632 [2024-07-15 09:21:44.796530] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.892 passed 00:13:33.892 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 09:21:44.878644] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.892 [2024-07-15 09:21:44.969808] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:33.892 [2024-07-15 09:21:44.977814] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:33.892 [2024-07-15 09:21:44.985811] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:33.892 [2024-07-15 09:21:44.993812] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:33.892 [2024-07-15 09:21:45.022906] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.892 passed 00:13:34.150 Test: admin_create_io_sq_verify_pc ...[2024-07-15 09:21:45.106431] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.150 [2024-07-15 09:21:45.122824] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:34.150 [2024-07-15 09:21:45.140738] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.150 passed 00:13:34.150 Test: admin_create_io_qp_max_qps ...[2024-07-15 09:21:45.221304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.527 [2024-07-15 09:21:46.330830] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:35.527 [2024-07-15 09:21:46.710972] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.787 passed 00:13:35.787 Test: admin_create_io_sq_shared_cq ...[2024-07-15 09:21:46.794214] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.787 [2024-07-15 09:21:46.925814] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:35.787 [2024-07-15 09:21:46.962898] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:36.046 passed 00:13:36.046 00:13:36.046 Run Summary: Type Total Ran Passed Failed Inactive 00:13:36.046 suites 1 1 n/a 0 0 00:13:36.046 tests 18 18 18 0 0 00:13:36.046 asserts 360 360 360 0 n/a 00:13:36.046 00:13:36.046 Elapsed time = 1.561 seconds 00:13:36.046 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 788279 00:13:36.046 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 788279 ']' 00:13:36.046 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 788279 00:13:36.046 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:36.046 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.046 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 788279 00:13:36.046 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:36.046 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:36.046 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 788279' 00:13:36.046 killing process with pid 788279 00:13:36.046 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 788279 00:13:36.046 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 788279 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:36.305 00:13:36.305 real 0m5.756s 00:13:36.305 user 0m16.156s 00:13:36.305 sys 0m0.542s 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:36.305 ************************************ 00:13:36.305 END TEST nvmf_vfio_user_nvme_compliance 00:13:36.305 ************************************ 00:13:36.305 09:21:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:36.305 09:21:47 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:36.305 09:21:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:36.305 09:21:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.305 09:21:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.305 ************************************ 00:13:36.305 START TEST nvmf_vfio_user_fuzz 00:13:36.305 ************************************ 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:36.305 * Looking for test storage... 00:13:36.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.305 09:21:47 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.305 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=789006 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 789006' 00:13:36.306 Process pid: 789006 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 789006 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 789006 ']' 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
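Condensed, the vfio-user fuzz bring-up that the surrounding trace performs looks like the sketch below. Every command is taken verbatim from the trace; the only assumptions are the $rootdir abbreviation for the long workspace prefix and the reading of rpc_cmd as the harness's wrapper around scripts/rpc.py (the trace shows only its call sites, not the helper's definition).

  # target side: core 0 only (-m 0x1), shm id 0, all tracepoint groups (-e 0xFFFF)
  $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten $nvmfpid                        # poll until /var/tmp/spdk.sock answers

  rpc_cmd nvmf_create_transport -t VFIOUSER     # no TCP listener needed for this test
  rm -rf /var/run/vfio-user; mkdir -p /var/run/vfio-user
  rpc_cmd bdev_malloc_create 64 512 -b malloc0  # 64 MiB RAM bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
          -t VFIOUSER -a /var/run/vfio-user -s 0

  # initiator side: fuzz from core 1 for 30 s with a fixed seed (flags exactly as logged)
  $rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -N -a \
          -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'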
00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.306 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.565 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.565 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:36.565 09:21:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:37.945 malloc0 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:37.945 09:21:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:10.024 Fuzzing completed. 
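For reading the summary that follows: taken as decimal NVMe opcodes, the admin commands the fuzzer saw succeed are Abort (8), Set Features (9), Get Features (10) and Keep Alive (24), and the lone successful I/O opcode is Flush (0) — commands lenient enough that randomized fields can still form a valid request. Of the ~655 k I/O and ~86 k admin commands completed in 30 seconds, only 2548 and 691 respectively succeeded; the target rejected everything else.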
Shutting down the fuzz application 00:14:10.024 00:14:10.024 Dumping successful admin opcodes: 00:14:10.024 8, 9, 10, 24, 00:14:10.024 Dumping successful io opcodes: 00:14:10.024 0, 00:14:10.024 NS: 0x200003a1ef00 I/O qp, Total commands completed: 655061, total successful commands: 2548, random_seed: 1049499008 00:14:10.024 NS: 0x200003a1ef00 admin qp, Total commands completed: 86479, total successful commands: 691, random_seed: 2010357184 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 789006 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 789006 ']' 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 789006 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 789006 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 789006' 00:14:10.024 killing process with pid 789006 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 789006 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 789006 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:10.024 00:14:10.024 real 0m32.272s 00:14:10.024 user 0m32.029s 00:14:10.024 sys 0m27.021s 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:10.024 09:22:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:10.024 ************************************ 00:14:10.024 END TEST nvmf_vfio_user_fuzz 00:14:10.024 ************************************ 00:14:10.024 09:22:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:10.024 09:22:19 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:10.024 09:22:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:10.024 09:22:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.024 09:22:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:10.024 ************************************ 00:14:10.024 START 
TEST nvmf_host_management 00:14:10.024 ************************************ 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:10.024 * Looking for test storage... 00:14:10.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.024 09:22:19 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:10.024 09:22:19 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:10.024 09:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:10.590 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.590 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:10.590 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:10.591 Found net devices under 0000:09:00.0: cvl_0_0 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:10.591 Found net devices under 0000:09:00.1: cvl_0_1 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.591 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.849 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:10.849 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.849 09:22:21 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.849 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.849 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:10.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:14:10.849 00:14:10.849 --- 10.0.0.2 ping statistics --- 00:14:10.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.849 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:10.849 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:14:10.849 00:14:10.849 --- 10.0.0.1 ping statistics --- 00:14:10.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.849 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:14:10.849 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.849 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:10.849 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.849 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.849 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:10.849 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=794455 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 794455 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 794455 ']' 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:10.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.850 09:22:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:10.850 [2024-07-15 09:22:21.920103] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:14:10.850 [2024-07-15 09:22:21.920190] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.850 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.850 [2024-07-15 09:22:21.982300] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.107 [2024-07-15 09:22:22.083962] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.107 [2024-07-15 09:22:22.084013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.107 [2024-07-15 09:22:22.084041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.107 [2024-07-15 09:22:22.084052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.107 [2024-07-15 09:22:22.084061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.107 [2024-07-15 09:22:22.084149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.107 [2024-07-15 09:22:22.084221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.107 [2024-07-15 09:22:22.084277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:11.107 [2024-07-15 09:22:22.084280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.107 [2024-07-15 09:22:22.246651] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.107 09:22:22 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.107 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.107 Malloc0 00:14:11.365 [2024-07-15 09:22:22.305606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=794498 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 794498 /var/tmp/bdevperf.sock 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 794498 ']' 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
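On the initiator side the harness runs SPDK's bdevperf example as a separate process with a private RPC socket. Condensed from the trace (flags verbatim; the /dev/fd/63 in the logged command line is the shell's process substitution carrying the JSON built just below):

  $rootdir/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock \          # private RPC socket, polled later via rpc_cmd -s
      --json <(gen_nvmf_target_json 0) \   # bdev config: attach Nvme0 to the TCP target
      -q 64 -o 65536 -w verify -t 10 &     # queue depth 64, 64 KiB verify I/O, 10 s run
  perfpid=$!
  waitforlisten $perfpid /var/tmp/bdevperf.sock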
00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:11.365 { 00:14:11.365 "params": { 00:14:11.365 "name": "Nvme$subsystem", 00:14:11.365 "trtype": "$TEST_TRANSPORT", 00:14:11.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:11.365 "adrfam": "ipv4", 00:14:11.365 "trsvcid": "$NVMF_PORT", 00:14:11.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:11.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:11.365 "hdgst": ${hdgst:-false}, 00:14:11.365 "ddgst": ${ddgst:-false} 00:14:11.365 }, 00:14:11.365 "method": "bdev_nvme_attach_controller" 00:14:11.365 } 00:14:11.365 EOF 00:14:11.365 )") 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:11.365 09:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:11.365 "params": { 00:14:11.365 "name": "Nvme0", 00:14:11.365 "trtype": "tcp", 00:14:11.365 "traddr": "10.0.0.2", 00:14:11.365 "adrfam": "ipv4", 00:14:11.365 "trsvcid": "4420", 00:14:11.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:11.365 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:11.365 "hdgst": false, 00:14:11.365 "ddgst": false 00:14:11.365 }, 00:14:11.365 "method": "bdev_nvme_attach_controller" 00:14:11.365 }' 00:14:11.365 [2024-07-15 09:22:22.386327] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:14:11.365 [2024-07-15 09:22:22.386404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794498 ] 00:14:11.365 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.365 [2024-07-15 09:22:22.448968] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.623 [2024-07-15 09:22:22.560520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.623 Running I/O for 10 seconds... 
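The waitforio helper that the next stretch of trace steps through reduces to the loop below (reconstructed from the visible branches, so a sketch rather than the helper's literal source): at most ten polls of bdevperf's iostat, a quarter second apart, until Nvme0n1 has served at least 100 reads. The trace shows 67 reads on the first poll and 579 on the second, at which point ret=0 and the loop breaks.

  waitforio() {                            # args: rpc socket, bdev name (both checked non-empty first)
      local ret=1 i read_io_count
      for ((i = 10; i != 0; i--)); do
          read_io_count=$(rpc_cmd -s "$1" bdev_get_iostat -b "$2" \
                          | jq -r '.bdevs[0].num_read_ops')
          [ "$read_io_count" -ge 100 ] && { ret=0; break; }   # enough traffic; test proceeds
          sleep 0.25
      done
      return $ret
  }
  waitforio /var/tmp/bdevperf.sock Nvme0n1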
00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:11.623 09:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.882 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:14:11.882 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:14:11.882 09:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.143 09:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:12.143 [2024-07-15 09:22:23.132384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e380 is same with the state(5) to be set 00:14:12.143 [2024-07-15 09:22:23.132499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e380 is same with the state(5) to be set 00:14:12.143 [2024-07-15 09:22:23.132516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e380 is same with the state(5) to be set 00:14:12.143 [2024-07-15 09:22:23.132529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e380 is same with the state(5) to be set 00:14:12.143 [2024-07-15 09:22:23.132541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e380 is same with the state(5) to be set 00:14:12.143 [2024-07-15 09:22:23.132553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e380 is same with the state(5) to be set 00:14:12.143 [2024-07-15 09:22:23.132565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e380 is same with the state(5) to be set 00:14:12.143 [2024-07-15 09:22:23.133512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.143 [2024-07-15 09:22:23.133554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.143 [2024-07-15 09:22:23.133573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.143 [2024-07-15 09:22:23.133587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.143 [2024-07-15 09:22:23.133601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.143 [2024-07-15 09:22:23.133615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.143 [2024-07-15 09:22:23.133629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.143 [2024-07-15 09:22:23.133644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.143 [2024-07-15 09:22:23.133658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce1790 is same with the state(5) to be set 00:14:12.143 [2024-07-15 09:22:23.133730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.143 [2024-07-15 09:22:23.133752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:12.143 [... 63 further nvme_io_qpair_print_command/spdk_nvme_print_completion pairs: WRITE sqid:1 cid:1-63 nsid:1 lba:82048-89984 len:128, each completed with ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:14:12.144 [2024-07-15 09:22:23.135898] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10f2900 was disconnected and freed. reset controller.
00:14:12.144 09:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:12.144 09:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:12.144 [2024-07-15 09:22:23.137015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:14:12.144 09:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:12.144 09:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:12.144 task offset: 81920 on job bdev=Nvme0n1 fails
00:14:12.144
00:14:12.145 Latency(us)
00:14:12.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:12.145 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:12.145 Job: Nvme0n1 ended in about 0.40 seconds with error
00:14:12.145 Verification LBA range: start 0x0 length 0x400
00:14:12.145 Nvme0n1 : 0.40 1619.16 101.20 161.92 0.00 34884.09 3070.48 33593.27
00:14:12.145 ===================================================================================================================
00:14:12.145 Total : 1619.16 101.20 161.92 0.00 34884.09 3070.48 33593.27
00:14:12.145 [2024-07-15 09:22:23.138870] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:12.145 [2024-07-15 09:22:23.138898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce1790 (9): Bad file descriptor
00:14:12.145 09:22:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:12.145 09:22:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:14:12.145 [2024-07-15 09:22:23.282933] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
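
A note on the abort run condensed above: it is the expected outcome of this failover step, not a test failure. Each completion carries NVMe status "(00/08)", i.e. status code type 0x0 (Generic Command Status) with status code 0x08 (Command Aborted due to SQ Deletion): when the initiator disconnects I/O qpair qid:1 to reset the controller, every WRITE still queued on that submission queue completes with that status before the qpair is freed. A minimal sketch of the same step outside the harness, assuming a target already serving nqn.2016-06.io.spdk:cnode0; the saved-log filename in the grep helper is hypothetical:

    # Re-authorize the host so the follow-up controller reset can reconnect
    # (the same call as target/host_management.sh@85 above):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

    # Hypothetical helper: count how many queued commands were aborted by
    # SQ deletion in a captured bdevperf log.
    grep -c 'ABORTED - SQ DELETION (00/08)' bdevperf.log
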
00:14:13.078 09:22:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 794498
00:14:13.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (794498) - No such process
00:14:13.078 09:22:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:14:13.078 09:22:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:14:13.078 09:22:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:14:13.078 09:22:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:14:13.078 09:22:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:14:13.078 09:22:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:14:13.078 09:22:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:13.078 09:22:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:13.078 {
00:14:13.078 "params": {
00:14:13.078 "name": "Nvme$subsystem",
00:14:13.078 "trtype": "$TEST_TRANSPORT",
00:14:13.078 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:13.078 "adrfam": "ipv4",
00:14:13.078 "trsvcid": "$NVMF_PORT",
00:14:13.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:13.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:13.078 "hdgst": ${hdgst:-false},
00:14:13.078 "ddgst": ${ddgst:-false}
00:14:13.078 },
00:14:13.078 "method": "bdev_nvme_attach_controller"
00:14:13.078 }
00:14:13.078 EOF
00:14:13.078 )")
00:14:13.078 09:22:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:14:13.078 09:22:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:14:13.078 09:22:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:14:13.078 09:22:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:13.078 "params": {
00:14:13.078 "name": "Nvme0",
00:14:13.078 "trtype": "tcp",
00:14:13.078 "traddr": "10.0.0.2",
00:14:13.078 "adrfam": "ipv4",
00:14:13.078 "trsvcid": "4420",
00:14:13.078 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:14:13.078 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:14:13.078 "hdgst": false,
00:14:13.078 "ddgst": false
00:14:13.078 },
00:14:13.078 "method": "bdev_nvme_attach_controller"
00:14:13.078 }'
00:14:13.078 [2024-07-15 09:22:24.192692] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
[2024-07-15 09:22:24.192768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794777 ]
00:14:13.078 EAL: No free 2048 kB hugepages reported on node 1
00:14:13.078 [2024-07-15 09:22:24.252701] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:13.338 [2024-07-15 09:22:24.363651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:13.596 Running I/O for 1 seconds...
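
The gen_nvmf_target_json fragment printf'd above is only the bdev_nvme_attach_controller entry; bdevperf reads it through /dev/fd/62 wrapped in the usual SPDK "subsystems" configuration document. A standalone sketch of the same invocation with the config written to a file instead — the outer wrapper is assumed here (gen_nvmf_target_json adds it outside this excerpt), while the parameter values are exactly the ones printed above:

    # Sketch: same bdevperf run as host_management.sh@100, config in a file.
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same flags as above: queue depth 64, 64 KiB verify I/O, 1 s run.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1
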
00:14:14.532
00:14:14.532 Latency(us)
00:14:14.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:14.532 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:14.532 Verification LBA range: start 0x0 length 0x400
00:14:14.532 Nvme0n1 : 1.01 1711.66 106.98 0.00 0.00 36775.65 7039.05 32816.55
00:14:14.532 ===================================================================================================================
00:14:14.532 Total : 1711.66 106.98 0.00 0.00 36775.65 7039.05 32816.55
00:14:14.792 09:22:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:14:14.792 09:22:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:14:14.792 09:22:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:14:14.792 09:22:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:14:14.792 09:22:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:14:14.792 09:22:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:14.792 09:22:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:14:14.792 09:22:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:14.792 09:22:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:14:14.792 09:22:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:14.792 09:22:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:15.051 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 794455 ']'
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 794455
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 794455 ']'
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 794455
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 794455
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 794455'
00:14:15.051 killing process with pid 794455
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 794455
00:14:15.051 09:22:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 794455
00:14:15.310 [2024-07-15 09:22:26.335732] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:14:15.310 09:22:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:15.310 09:22:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:15.310 09:22:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:15.310 09:22:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:15.310 09:22:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:15.310 09:22:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:15.310 09:22:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:15.310 09:22:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:17.217 09:22:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:17.217 09:22:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:14:17.217
00:14:17.217 real 0m8.722s
00:14:17.217 user 0m20.238s
00:14:17.217 sys 0m2.571s
00:14:17.217 09:22:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable
00:14:17.217 09:22:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:17.475 ************************************
00:14:17.475 END TEST nvmf_host_management
00:14:17.475 ************************************
00:14:17.475 09:22:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:14:17.475 09:22:28 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:14:17.475 09:22:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:14:17.475 09:22:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:17.475 09:22:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:17.475 ************************************
00:14:17.475 START TEST nvmf_lvol
00:14:17.475 ************************************
00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:14:17.475 * Looking for test storage...
00:14:17.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:17.475 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.476 09:22:28 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:17.476 09:22:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:19.374 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:19.375 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:19.375 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:19.375 Found net devices under 0000:09:00.0: cvl_0_0 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:19.375 Found net devices under 0000:09:00.1: cvl_0_1 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:19.375 
09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.375 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:19.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:14:19.635 00:14:19.635 --- 10.0.0.2 ping statistics --- 00:14:19.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.635 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:19.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:14:19.635 00:14:19.635 --- 10.0.0.1 ping statistics --- 00:14:19.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.635 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=796969 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 796969 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 796969 ']' 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.635 09:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:19.635 [2024-07-15 09:22:30.763494] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:14:19.635 [2024-07-15 09:22:30.763584] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.635 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.894 [2024-07-15 09:22:30.831070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:19.894 [2024-07-15 09:22:30.943194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.894 [2024-07-15 09:22:30.943243] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:19.894 [2024-07-15 09:22:30.943273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.894 [2024-07-15 09:22:30.943285] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.894 [2024-07-15 09:22:30.943295] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.894 [2024-07-15 09:22:30.943374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.894 [2024-07-15 09:22:30.943443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.894 [2024-07-15 09:22:30.943446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.894 09:22:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.894 09:22:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:19.894 09:22:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:19.894 09:22:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:19.894 09:22:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:19.894 09:22:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.894 09:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:20.462 [2024-07-15 09:22:31.360006] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.462 09:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:20.721 09:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:20.721 09:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:20.979 09:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:20.979 09:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:21.237 09:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:21.496 09:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=32dc1ff6-f181-43cd-b9a2-8deaac54c493 00:14:21.496 09:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 32dc1ff6-f181-43cd-b9a2-8deaac54c493 lvol 20 00:14:21.754 09:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=82127f11-cd65-415a-a700-9ff876c1f8c4 00:14:21.754 09:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:22.011 09:22:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 82127f11-cd65-415a-a700-9ff876c1f8c4 00:14:22.275 09:22:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
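
The rpc.py trace above builds the whole stack under test: a TCP transport, two 64 MiB malloc bdevs striped into raid0, an lvolstore on the raid, a 20 MiB lvol, and an NVMe-oF subsystem exporting that lvol on 10.0.0.2:4420. Collected into one runnable sketch — the UUIDs are the ones this particular run happened to return, and the snapshot/resize/clone/inflate calls are the grow steps exercised a little further down in this log:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8192 B IO unit
    $RPC bdev_malloc_create 64 512                          # -> Malloc0 (64 MiB, 512 B blocks)
    $RPC bdev_malloc_create 64 512                          # -> Malloc1
    $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    $RPC bdev_lvol_create_lvstore raid0 lvs                 # prints the lvstore UUID
    $RPC bdev_lvol_create -u 32dc1ff6-f181-43cd-b9a2-8deaac54c493 lvol 20
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 82127f11-cd65-415a-a700-9ff876c1f8c4
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Grow steps from later in this run (names/UUIDs as returned here):
    $RPC bdev_lvol_snapshot 82127f11-cd65-415a-a700-9ff876c1f8c4 MY_SNAPSHOT
    $RPC bdev_lvol_resize 82127f11-cd65-415a-a700-9ff876c1f8c4 30
    $RPC bdev_lvol_clone eb426dd9-42a9-4be1-b493-8542da2e5a5c MY_CLONE
    $RPC bdev_lvol_inflate 12583e64-8eb6-45f1-b524-8786967b1174
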
00:14:22.553 [2024-07-15 09:22:33.602301] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.553 09:22:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:22.844 09:22:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=797408 00:14:22.844 09:22:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:22.844 09:22:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:22.844 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.823 09:22:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 82127f11-cd65-415a-a700-9ff876c1f8c4 MY_SNAPSHOT 00:14:24.081 09:22:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=eb426dd9-42a9-4be1-b493-8542da2e5a5c 00:14:24.081 09:22:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 82127f11-cd65-415a-a700-9ff876c1f8c4 30 00:14:24.338 09:22:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone eb426dd9-42a9-4be1-b493-8542da2e5a5c MY_CLONE 00:14:24.596 09:22:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=12583e64-8eb6-45f1-b524-8786967b1174 00:14:24.596 09:22:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 12583e64-8eb6-45f1-b524-8786967b1174 00:14:25.530 09:22:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 797408 00:14:33.653 Initializing NVMe Controllers 00:14:33.653 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:33.653 Controller IO queue size 128, less than required. 00:14:33.653 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:33.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:33.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:33.653 Initialization complete. Launching workers. 
00:14:33.653 ========================================================
00:14:33.653 Latency(us)
00:14:33.653 Device Information : IOPS MiB/s Average min max
00:14:33.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10476.40 40.92 12217.62 570.18 65574.10
00:14:33.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10471.80 40.91 12230.65 1870.88 71447.38
00:14:33.653 ========================================================
00:14:33.653 Total : 20948.20 81.83 12224.13 570.18 71447.38
00:14:33.653
00:14:33.653 09:22:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:14:33.653 09:22:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 82127f11-cd65-415a-a700-9ff876c1f8c4
00:14:33.911 09:22:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32dc1ff6-f181-43cd-b9a2-8deaac54c493
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:34.169 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 796969 ']'
00:14:34.169 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 796969
00:14:34.170 09:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 796969 ']'
00:14:34.170 09:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 796969
00:14:34.170 09:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname
00:14:34.170 09:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:34.170 09:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 796969
00:14:34.170 09:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:14:34.170 09:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:14:34.170 09:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 796969'
00:14:34.170 killing process with pid 796969
00:14:34.170 09:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 796969
00:14:34.170 09:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 796969
00:14:34.428 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:34.428 09:22:45
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:34.428 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:34.428 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.428 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:34.428 09:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.428 09:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.428 09:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:36.968 00:14:36.968 real 0m19.163s 00:14:36.968 user 1m5.291s 00:14:36.968 sys 0m5.658s 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:36.968 ************************************ 00:14:36.968 END TEST nvmf_lvol 00:14:36.968 ************************************ 00:14:36.968 09:22:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:36.968 09:22:47 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:36.968 09:22:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:36.968 09:22:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:36.968 09:22:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:36.968 ************************************ 00:14:36.968 START TEST nvmf_lvs_grow 00:14:36.968 ************************************ 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:36.968 * Looking for test storage... 
00:14:36.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:36.968 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:36.969 09:22:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:38.874 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:38.874 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:38.874 Found net devices under 0000:09:00.0: cvl_0_0 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:38.874 Found net devices under 0000:09:00.1: cvl_0_1 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.874 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:38.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:14:38.875 00:14:38.875 --- 10.0.0.2 ping statistics --- 00:14:38.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.875 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:14:38.875 00:14:38.875 --- 10.0.0.1 ping statistics --- 00:14:38.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.875 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=800664 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 800664 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 800664 ']' 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.875 09:22:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:38.875 [2024-07-15 09:22:49.919307] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:14:38.875 [2024-07-15 09:22:49.919385] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.875 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.875 [2024-07-15 09:22:49.983718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.132 [2024-07-15 09:22:50.100631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.132 [2024-07-15 09:22:50.100684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:39.132 [2024-07-15 09:22:50.100719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.132 [2024-07-15 09:22:50.100731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.132 [2024-07-15 09:22:50.100741] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.132 [2024-07-15 09:22:50.100769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.132 09:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:39.132 09:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:39.132 09:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.132 09:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:39.132 09:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:39.132 09:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.132 09:22:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:39.388 [2024-07-15 09:22:50.509318] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:39.388 ************************************ 00:14:39.388 START TEST lvs_grow_clean 00:14:39.388 ************************************ 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:39.388 09:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:39.646 09:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:39.646 09:22:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:39.903 09:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=057e096e-f3f9-4dd5-a09c-5aaee559e47b 00:14:39.903 09:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 057e096e-f3f9-4dd5-a09c-5aaee559e47b 00:14:39.903 09:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:40.161 09:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:40.161 09:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:40.161 09:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 057e096e-f3f9-4dd5-a09c-5aaee559e47b lvol 150 00:14:40.419 09:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e4fe8eab-55b1-41ff-9dff-5dd27b3b1d62 00:14:40.419 09:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:40.419 09:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:40.677 [2024-07-15 09:22:51.790906] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:40.677 [2024-07-15 09:22:51.791012] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:40.677 true 00:14:40.677 09:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 057e096e-f3f9-4dd5-a09c-5aaee559e47b 00:14:40.677 09:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:40.934 09:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:40.934 09:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:41.192 09:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e4fe8eab-55b1-41ff-9dff-5dd27b3b1d62 00:14:41.451 09:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:41.711 [2024-07-15 09:22:52.765848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.711 09:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:41.969 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=801051 00:14:41.969 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:41.969 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:41.970 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 801051 /var/tmp/bdevperf.sock 00:14:41.970 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 801051 ']' 00:14:41.970 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:41.970 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.970 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:41.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:41.970 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.970 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:41.970 [2024-07-15 09:22:53.063954] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
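Everything this bdevperf instance is about to exercise was staged by the lvs_grow setup RPCs logged above: a file-backed AIO bdev, an lvstore of 4 MiB clusters on top of it, a 150M lvol, and an NVMe/TCP subsystem exporting that lvol on 10.0.0.2:4420. Condensed into a sketch, with paths and names exactly as the test uses them and $rpc as in the teardown sketch earlier:

aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$aio"                                 # 200M backing file
"$rpc" bdev_aio_create "$aio" aio_bdev 4096             # AIO bdev with 4K blocks
lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # reports 49 data clusters
lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 150)      # 150M lvol on the store

truncate -s 400M "$aio"                                 # grow the file underneath...
"$rpc" bdev_aio_rescan aio_bdev                         # ...and let the bdev see it

"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note that after the rescan the bdev is 400M but the lvstore still reports 49 total data clusters; claiming the new space is deliberately deferred to bdev_lvol_grow_lvstore, which the test issues while bdevperf I/O is in flight.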
00:14:41.970 [2024-07-15 09:22:53.064043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid801051 ] 00:14:41.970 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.970 [2024-07-15 09:22:53.120954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.228 [2024-07-15 09:22:53.229054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.228 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:42.228 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:42.228 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:42.487 Nvme0n1 00:14:42.745 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:43.003 [ 00:14:43.003 { 00:14:43.003 "name": "Nvme0n1", 00:14:43.003 "aliases": [ 00:14:43.003 "e4fe8eab-55b1-41ff-9dff-5dd27b3b1d62" 00:14:43.003 ], 00:14:43.003 "product_name": "NVMe disk", 00:14:43.003 "block_size": 4096, 00:14:43.003 "num_blocks": 38912, 00:14:43.003 "uuid": "e4fe8eab-55b1-41ff-9dff-5dd27b3b1d62", 00:14:43.003 "assigned_rate_limits": { 00:14:43.003 "rw_ios_per_sec": 0, 00:14:43.003 "rw_mbytes_per_sec": 0, 00:14:43.003 "r_mbytes_per_sec": 0, 00:14:43.003 "w_mbytes_per_sec": 0 00:14:43.003 }, 00:14:43.003 "claimed": false, 00:14:43.003 "zoned": false, 00:14:43.003 "supported_io_types": { 00:14:43.003 "read": true, 00:14:43.003 "write": true, 00:14:43.003 "unmap": true, 00:14:43.003 "flush": true, 00:14:43.003 "reset": true, 00:14:43.003 "nvme_admin": true, 00:14:43.003 "nvme_io": true, 00:14:43.003 "nvme_io_md": false, 00:14:43.003 "write_zeroes": true, 00:14:43.003 "zcopy": false, 00:14:43.003 "get_zone_info": false, 00:14:43.003 "zone_management": false, 00:14:43.003 "zone_append": false, 00:14:43.003 "compare": true, 00:14:43.003 "compare_and_write": true, 00:14:43.003 "abort": true, 00:14:43.003 "seek_hole": false, 00:14:43.003 "seek_data": false, 00:14:43.003 "copy": true, 00:14:43.003 "nvme_iov_md": false 00:14:43.003 }, 00:14:43.003 "memory_domains": [ 00:14:43.003 { 00:14:43.003 "dma_device_id": "system", 00:14:43.003 "dma_device_type": 1 00:14:43.003 } 00:14:43.004 ], 00:14:43.004 "driver_specific": { 00:14:43.004 "nvme": [ 00:14:43.004 { 00:14:43.004 "trid": { 00:14:43.004 "trtype": "TCP", 00:14:43.004 "adrfam": "IPv4", 00:14:43.004 "traddr": "10.0.0.2", 00:14:43.004 "trsvcid": "4420", 00:14:43.004 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:43.004 }, 00:14:43.004 "ctrlr_data": { 00:14:43.004 "cntlid": 1, 00:14:43.004 "vendor_id": "0x8086", 00:14:43.004 "model_number": "SPDK bdev Controller", 00:14:43.004 "serial_number": "SPDK0", 00:14:43.004 "firmware_revision": "24.09", 00:14:43.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:43.004 "oacs": { 00:14:43.004 "security": 0, 00:14:43.004 "format": 0, 00:14:43.004 "firmware": 0, 00:14:43.004 "ns_manage": 0 00:14:43.004 }, 00:14:43.004 "multi_ctrlr": true, 00:14:43.004 "ana_reporting": false 00:14:43.004 }, 
00:14:43.004 "vs": { 00:14:43.004 "nvme_version": "1.3" 00:14:43.004 }, 00:14:43.004 "ns_data": { 00:14:43.004 "id": 1, 00:14:43.004 "can_share": true 00:14:43.004 } 00:14:43.004 } 00:14:43.004 ], 00:14:43.004 "mp_policy": "active_passive" 00:14:43.004 } 00:14:43.004 } 00:14:43.004 ] 00:14:43.004 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=801120 00:14:43.004 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:43.004 09:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:43.004 Running I/O for 10 seconds... 00:14:43.942 Latency(us) 00:14:43.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.942 Nvme0n1 : 1.00 15495.00 60.53 0.00 0.00 0.00 0.00 0.00 00:14:43.942 =================================================================================================================== 00:14:43.942 Total : 15495.00 60.53 0.00 0.00 0.00 0.00 0.00 00:14:43.942 00:14:44.878 09:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 057e096e-f3f9-4dd5-a09c-5aaee559e47b 00:14:45.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.136 Nvme0n1 : 2.00 15685.00 61.27 0.00 0.00 0.00 0.00 0.00 00:14:45.136 =================================================================================================================== 00:14:45.136 Total : 15685.00 61.27 0.00 0.00 0.00 0.00 0.00 00:14:45.136 00:14:45.136 true 00:14:45.136 09:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 057e096e-f3f9-4dd5-a09c-5aaee559e47b 00:14:45.136 09:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:45.395 09:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:45.396 09:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:45.396 09:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 801120 00:14:45.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.963 Nvme0n1 : 3.00 15790.67 61.68 0.00 0.00 0.00 0.00 0.00 00:14:45.963 =================================================================================================================== 00:14:45.963 Total : 15790.67 61.68 0.00 0.00 0.00 0.00 0.00 00:14:45.963 00:14:46.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.925 Nvme0n1 : 4.00 15845.75 61.90 0.00 0.00 0.00 0.00 0.00 00:14:46.925 =================================================================================================================== 00:14:46.925 Total : 15845.75 61.90 0.00 0.00 0.00 0.00 0.00 00:14:46.925 00:14:48.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.301 Nvme0n1 : 5.00 15927.80 62.22 0.00 0.00 0.00 0.00 0.00 00:14:48.301 =================================================================================================================== 00:14:48.301 
Total : 15927.80 62.22 0.00 0.00 0.00 0.00 0.00
00:14:48.301
00:14:49.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:49.239 Nvme0n1 : 6.00 15972.17 62.39 0.00 0.00 0.00 0.00 0.00
00:14:49.239 ===================================================================================================================
00:14:49.239 Total : 15972.17 62.39 0.00 0.00 0.00 0.00 0.00
00:14:49.239
00:14:50.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:50.175 Nvme0n1 : 7.00 16022.29 62.59 0.00 0.00 0.00 0.00 0.00
00:14:50.175 ===================================================================================================================
00:14:50.175 Total : 16022.29 62.59 0.00 0.00 0.00 0.00 0.00
00:14:50.175
00:14:51.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:51.116 Nvme0n1 : 8.00 16067.50 62.76 0.00 0.00 0.00 0.00 0.00
00:14:51.116 ===================================================================================================================
00:14:51.116 Total : 16067.50 62.76 0.00 0.00 0.00 0.00 0.00
00:14:51.116
00:14:52.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:52.053 Nvme0n1 : 9.00 16096.00 62.88 0.00 0.00 0.00 0.00 0.00
00:14:52.053 ===================================================================================================================
00:14:52.053 Total : 16096.00 62.88 0.00 0.00 0.00 0.00 0.00
00:14:52.053
00:14:52.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:52.987 Nvme0n1 : 10.00 16112.00 62.94 0.00 0.00 0.00 0.00 0.00
00:14:52.987 ===================================================================================================================
00:14:52.987 Total : 16112.00 62.94 0.00 0.00 0.00 0.00 0.00
00:14:52.987
00:14:52.987
00:14:52.987
00:14:52.987 Latency(us)
00:14:52.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:52.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:52.987 Nvme0n1 : 10.00 16118.29 62.96 0.00 0.00 7936.76 2269.49 15728.64
00:14:52.987 ===================================================================================================================
00:14:52.987 Total : 16118.29 62.96 0.00 0.00 7936.76 2269.49 15728.64
00:14:52.987 0
00:14:52.987 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 801051
00:14:52.987 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 801051 ']'
00:14:52.987 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 801051
00:14:52.987 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname
00:14:52.987 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:52.987 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 801051
00:14:52.987 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:14:52.987 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:14:52.987 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 801051'
00:14:52.987 killing process with pid 801051
00:14:52.987 09:23:04
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 801051 00:14:52.987 Received shutdown signal, test time was about 10.000000 seconds 00:14:52.987 00:14:52.987 Latency(us) 00:14:52.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.987 =================================================================================================================== 00:14:52.987 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:52.987 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 801051 00:14:53.245 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:53.810 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:53.810 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 057e096e-f3f9-4dd5-a09c-5aaee559e47b 00:14:53.810 09:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:54.070 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:54.070 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:54.070 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:54.329 [2024-07-15 09:23:05.444535] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:54.329 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 057e096e-f3f9-4dd5-a09c-5aaee559e47b 00:14:54.329 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:54.329 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 057e096e-f3f9-4dd5-a09c-5aaee559e47b 00:14:54.329 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.329 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.329 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.329 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.329 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.329 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.329 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.329 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:54.329 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 057e096e-f3f9-4dd5-a09c-5aaee559e47b 00:14:54.589 request: 00:14:54.590 { 00:14:54.590 "uuid": "057e096e-f3f9-4dd5-a09c-5aaee559e47b", 00:14:54.590 "method": "bdev_lvol_get_lvstores", 00:14:54.590 "req_id": 1 00:14:54.590 } 00:14:54.590 Got JSON-RPC error response 00:14:54.590 response: 00:14:54.590 { 00:14:54.590 "code": -19, 00:14:54.590 "message": "No such device" 00:14:54.590 } 00:14:54.590 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:54.590 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:54.590 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:54.590 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:54.590 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:54.849 aio_bdev 00:14:54.849 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e4fe8eab-55b1-41ff-9dff-5dd27b3b1d62 00:14:54.849 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=e4fe8eab-55b1-41ff-9dff-5dd27b3b1d62 00:14:54.849 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:54.849 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:54.849 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:54.849 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:54.849 09:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:55.107 09:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e4fe8eab-55b1-41ff-9dff-5dd27b3b1d62 -t 2000 00:14:55.366 [ 00:14:55.366 { 00:14:55.366 "name": "e4fe8eab-55b1-41ff-9dff-5dd27b3b1d62", 00:14:55.366 "aliases": [ 00:14:55.366 "lvs/lvol" 00:14:55.366 ], 00:14:55.366 "product_name": "Logical Volume", 00:14:55.366 "block_size": 4096, 00:14:55.366 "num_blocks": 38912, 00:14:55.366 "uuid": "e4fe8eab-55b1-41ff-9dff-5dd27b3b1d62", 00:14:55.366 "assigned_rate_limits": { 00:14:55.366 "rw_ios_per_sec": 0, 00:14:55.366 "rw_mbytes_per_sec": 0, 00:14:55.366 "r_mbytes_per_sec": 0, 00:14:55.366 "w_mbytes_per_sec": 0 00:14:55.366 }, 00:14:55.366 "claimed": false, 00:14:55.366 "zoned": false, 00:14:55.366 "supported_io_types": { 00:14:55.366 "read": true, 00:14:55.366 "write": true, 00:14:55.366 "unmap": true, 00:14:55.366 "flush": false, 00:14:55.366 "reset": true, 00:14:55.366 "nvme_admin": false, 00:14:55.366 "nvme_io": false, 00:14:55.366 
"nvme_io_md": false, 00:14:55.366 "write_zeroes": true, 00:14:55.366 "zcopy": false, 00:14:55.366 "get_zone_info": false, 00:14:55.366 "zone_management": false, 00:14:55.366 "zone_append": false, 00:14:55.366 "compare": false, 00:14:55.366 "compare_and_write": false, 00:14:55.366 "abort": false, 00:14:55.366 "seek_hole": true, 00:14:55.366 "seek_data": true, 00:14:55.366 "copy": false, 00:14:55.366 "nvme_iov_md": false 00:14:55.366 }, 00:14:55.366 "driver_specific": { 00:14:55.366 "lvol": { 00:14:55.366 "lvol_store_uuid": "057e096e-f3f9-4dd5-a09c-5aaee559e47b", 00:14:55.366 "base_bdev": "aio_bdev", 00:14:55.366 "thin_provision": false, 00:14:55.366 "num_allocated_clusters": 38, 00:14:55.366 "snapshot": false, 00:14:55.366 "clone": false, 00:14:55.366 "esnap_clone": false 00:14:55.366 } 00:14:55.366 } 00:14:55.366 } 00:14:55.366 ] 00:14:55.366 09:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:55.366 09:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 057e096e-f3f9-4dd5-a09c-5aaee559e47b 00:14:55.366 09:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:55.625 09:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:55.625 09:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 057e096e-f3f9-4dd5-a09c-5aaee559e47b 00:14:55.625 09:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:55.885 09:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:55.885 09:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e4fe8eab-55b1-41ff-9dff-5dd27b3b1d62 00:14:56.145 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 057e096e-f3f9-4dd5-a09c-5aaee559e47b 00:14:56.404 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:56.661 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.661 00:14:56.661 real 0m17.273s 00:14:56.661 user 0m16.764s 00:14:56.661 sys 0m1.844s 00:14:56.661 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:56.661 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:56.661 ************************************ 00:14:56.661 END TEST lvs_grow_clean 00:14:56.661 ************************************ 00:14:56.661 09:23:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:56.661 09:23:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:56.661 09:23:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:56.661 09:23:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:56.661 09:23:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:56.921 ************************************ 00:14:56.921 START TEST lvs_grow_dirty 00:14:56.921 ************************************ 00:14:56.921 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:56.921 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:56.921 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:56.921 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:56.921 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:56.921 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:56.921 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:56.921 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.921 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.921 09:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:57.181 09:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:57.181 09:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:57.441 09:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:14:57.441 09:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:57.441 09:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:14:57.441 09:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:57.441 09:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:57.441 09:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 lvol 150 00:14:57.699 09:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6e727a95-6bdd-402d-ac05-704224158542 00:14:57.700 09:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:57.700 09:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:58.267 
[2024-07-15 09:23:09.159225] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:58.267 [2024-07-15 09:23:09.159326] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:58.267 true 00:14:58.267 09:23:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:14:58.267 09:23:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:58.267 09:23:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:58.267 09:23:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:58.526 09:23:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6e727a95-6bdd-402d-ac05-704224158542 00:14:58.786 09:23:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:59.048 [2024-07-15 09:23:10.190320] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.048 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:59.307 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=803157 00:14:59.307 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:59.307 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:59.307 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 803157 /var/tmp/bdevperf.sock 00:14:59.307 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 803157 ']' 00:14:59.307 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:59.307 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.307 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:59.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
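As in the clean pass, the bdevperf process just launched is driven over its own RPC socket rather than the target's default one: the NVMe/TCP subsystem is attached as bdev Nvme0 first, and perform_tests then starts the 10-second randwrite run whose per-second tables follow. The two driver commands, as the log shows them, with $rpc as before:

sock=/var/tmp/bdevperf.sock
"$rpc" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
       -s "$sock" perform_tests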
00:14:59.307 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.307 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:59.307 [2024-07-15 09:23:10.488275] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:14:59.307 [2024-07-15 09:23:10.488362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803157 ] 00:14:59.566 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.566 [2024-07-15 09:23:10.545451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.566 [2024-07-15 09:23:10.653563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.827 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.827 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:59.827 09:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:00.085 Nvme0n1 00:15:00.085 09:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:00.344 [ 00:15:00.344 { 00:15:00.344 "name": "Nvme0n1", 00:15:00.344 "aliases": [ 00:15:00.344 "6e727a95-6bdd-402d-ac05-704224158542" 00:15:00.344 ], 00:15:00.344 "product_name": "NVMe disk", 00:15:00.345 "block_size": 4096, 00:15:00.345 "num_blocks": 38912, 00:15:00.345 "uuid": "6e727a95-6bdd-402d-ac05-704224158542", 00:15:00.345 "assigned_rate_limits": { 00:15:00.345 "rw_ios_per_sec": 0, 00:15:00.345 "rw_mbytes_per_sec": 0, 00:15:00.345 "r_mbytes_per_sec": 0, 00:15:00.345 "w_mbytes_per_sec": 0 00:15:00.345 }, 00:15:00.345 "claimed": false, 00:15:00.345 "zoned": false, 00:15:00.345 "supported_io_types": { 00:15:00.345 "read": true, 00:15:00.345 "write": true, 00:15:00.345 "unmap": true, 00:15:00.345 "flush": true, 00:15:00.345 "reset": true, 00:15:00.345 "nvme_admin": true, 00:15:00.345 "nvme_io": true, 00:15:00.345 "nvme_io_md": false, 00:15:00.345 "write_zeroes": true, 00:15:00.345 "zcopy": false, 00:15:00.345 "get_zone_info": false, 00:15:00.345 "zone_management": false, 00:15:00.345 "zone_append": false, 00:15:00.345 "compare": true, 00:15:00.345 "compare_and_write": true, 00:15:00.345 "abort": true, 00:15:00.345 "seek_hole": false, 00:15:00.345 "seek_data": false, 00:15:00.345 "copy": true, 00:15:00.345 "nvme_iov_md": false 00:15:00.345 }, 00:15:00.345 "memory_domains": [ 00:15:00.345 { 00:15:00.345 "dma_device_id": "system", 00:15:00.345 "dma_device_type": 1 00:15:00.345 } 00:15:00.345 ], 00:15:00.345 "driver_specific": { 00:15:00.345 "nvme": [ 00:15:00.345 { 00:15:00.345 "trid": { 00:15:00.345 "trtype": "TCP", 00:15:00.345 "adrfam": "IPv4", 00:15:00.345 "traddr": "10.0.0.2", 00:15:00.345 "trsvcid": "4420", 00:15:00.345 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:00.345 }, 00:15:00.345 "ctrlr_data": { 00:15:00.345 "cntlid": 1, 00:15:00.345 "vendor_id": "0x8086", 00:15:00.345 "model_number": "SPDK bdev Controller", 00:15:00.345 "serial_number": "SPDK0", 
00:15:00.345 "firmware_revision": "24.09", 00:15:00.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:00.345 "oacs": { 00:15:00.345 "security": 0, 00:15:00.345 "format": 0, 00:15:00.345 "firmware": 0, 00:15:00.345 "ns_manage": 0 00:15:00.345 }, 00:15:00.345 "multi_ctrlr": true, 00:15:00.345 "ana_reporting": false 00:15:00.345 }, 00:15:00.345 "vs": { 00:15:00.345 "nvme_version": "1.3" 00:15:00.345 }, 00:15:00.345 "ns_data": { 00:15:00.345 "id": 1, 00:15:00.345 "can_share": true 00:15:00.345 } 00:15:00.345 } 00:15:00.345 ], 00:15:00.345 "mp_policy": "active_passive" 00:15:00.345 } 00:15:00.345 } 00:15:00.345 ] 00:15:00.345 09:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=803289 00:15:00.345 09:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:00.345 09:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.345 Running I/O for 10 seconds... 00:15:01.286 Latency(us) 00:15:01.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.286 Nvme0n1 : 1.00 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:15:01.286 =================================================================================================================== 00:15:01.286 Total : 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:15:01.286 00:15:02.223 09:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:15:02.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.480 Nvme0n1 : 2.00 15623.00 61.03 0.00 0.00 0.00 0.00 0.00 00:15:02.480 =================================================================================================================== 00:15:02.480 Total : 15623.00 61.03 0.00 0.00 0.00 0.00 0.00 00:15:02.480 00:15:02.480 true 00:15:02.480 09:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:15:02.480 09:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:02.738 09:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:02.738 09:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:02.738 09:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 803289 00:15:03.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.306 Nvme0n1 : 3.00 15749.33 61.52 0.00 0.00 0.00 0.00 0.00 00:15:03.306 =================================================================================================================== 00:15:03.306 Total : 15749.33 61.52 0.00 0.00 0.00 0.00 0.00 00:15:03.306 00:15:04.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.684 Nvme0n1 : 4.00 15876.00 62.02 0.00 0.00 0.00 0.00 0.00 00:15:04.684 =================================================================================================================== 00:15:04.685 Total : 15876.00 62.02 0.00 0.00 
0.00 0.00 0.00 00:15:04.685 00:15:05.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.620 Nvme0n1 : 5.00 15939.40 62.26 0.00 0.00 0.00 0.00 0.00 00:15:05.620 =================================================================================================================== 00:15:05.620 Total : 15939.40 62.26 0.00 0.00 0.00 0.00 0.00 00:15:05.620 00:15:06.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.557 Nvme0n1 : 6.00 16008.33 62.53 0.00 0.00 0.00 0.00 0.00 00:15:06.557 =================================================================================================================== 00:15:06.557 Total : 16008.33 62.53 0.00 0.00 0.00 0.00 0.00 00:15:06.557 00:15:07.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.492 Nvme0n1 : 7.00 16043.71 62.67 0.00 0.00 0.00 0.00 0.00 00:15:07.492 =================================================================================================================== 00:15:07.492 Total : 16043.71 62.67 0.00 0.00 0.00 0.00 0.00 00:15:07.492 00:15:08.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.430 Nvme0n1 : 8.00 16070.25 62.77 0.00 0.00 0.00 0.00 0.00 00:15:08.430 =================================================================================================================== 00:15:08.430 Total : 16070.25 62.77 0.00 0.00 0.00 0.00 0.00 00:15:08.430 00:15:09.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.367 Nvme0n1 : 9.00 16090.89 62.86 0.00 0.00 0.00 0.00 0.00 00:15:09.367 =================================================================================================================== 00:15:09.367 Total : 16090.89 62.86 0.00 0.00 0.00 0.00 0.00 00:15:09.367 00:15:10.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.311 Nvme0n1 : 10.00 16110.80 62.93 0.00 0.00 0.00 0.00 0.00 00:15:10.311 =================================================================================================================== 00:15:10.311 Total : 16110.80 62.93 0.00 0.00 0.00 0.00 0.00 00:15:10.311 00:15:10.311 00:15:10.311 Latency(us) 00:15:10.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.311 Nvme0n1 : 10.01 16118.09 62.96 0.00 0.00 7935.97 4174.89 17961.72 00:15:10.311 =================================================================================================================== 00:15:10.311 Total : 16118.09 62.96 0.00 0.00 7935.97 4174.89 17961.72 00:15:10.311 0 00:15:10.311 09:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 803157 00:15:10.311 09:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 803157 ']' 00:15:10.311 09:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 803157 00:15:10.311 09:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:10.311 09:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:10.311 09:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 803157 00:15:10.570 09:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:10.570 09:23:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:10.570 09:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 803157' 00:15:10.570 killing process with pid 803157 00:15:10.570 09:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 803157 00:15:10.570 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.570 00:15:10.570 Latency(us) 00:15:10.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.570 =================================================================================================================== 00:15:10.570 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.570 09:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 803157 00:15:10.837 09:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:11.095 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:11.353 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:15:11.353 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:11.611 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 800664 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 800664 00:15:11.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 800664 Killed "${NVMF_APP[@]}" "$@" 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=804569 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 804569 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 804569 ']' 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.612 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:11.612 [2024-07-15 09:23:22.650422] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:15:11.612 [2024-07-15 09:23:22.650515] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.612 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.612 [2024-07-15 09:23:22.713568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.870 [2024-07-15 09:23:22.813067] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.870 [2024-07-15 09:23:22.813132] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.870 [2024-07-15 09:23:22.813146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.870 [2024-07-15 09:23:22.813171] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.870 [2024-07-15 09:23:22.813180] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:11.870 [2024-07-15 09:23:22.813211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.870 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.870 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:11.870 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:11.870 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.870 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:11.870 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.870 09:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:12.129 [2024-07-15 09:23:23.172391] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:12.129 [2024-07-15 09:23:23.172528] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:12.129 [2024-07-15 09:23:23.172573] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:12.129 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:12.129 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6e727a95-6bdd-402d-ac05-704224158542 00:15:12.129 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6e727a95-6bdd-402d-ac05-704224158542 00:15:12.129 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.129 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:12.129 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.129 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.129 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:12.388 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e727a95-6bdd-402d-ac05-704224158542 -t 2000 00:15:12.646 [ 00:15:12.646 { 00:15:12.646 "name": "6e727a95-6bdd-402d-ac05-704224158542", 00:15:12.646 "aliases": [ 00:15:12.646 "lvs/lvol" 00:15:12.646 ], 00:15:12.646 "product_name": "Logical Volume", 00:15:12.646 "block_size": 4096, 00:15:12.646 "num_blocks": 38912, 00:15:12.646 "uuid": "6e727a95-6bdd-402d-ac05-704224158542", 00:15:12.646 "assigned_rate_limits": { 00:15:12.646 "rw_ios_per_sec": 0, 00:15:12.646 "rw_mbytes_per_sec": 0, 00:15:12.646 "r_mbytes_per_sec": 0, 00:15:12.646 "w_mbytes_per_sec": 0 00:15:12.646 }, 00:15:12.646 "claimed": false, 00:15:12.646 "zoned": false, 00:15:12.646 "supported_io_types": { 00:15:12.646 "read": true, 00:15:12.646 "write": true, 00:15:12.646 "unmap": true, 00:15:12.646 "flush": false, 00:15:12.646 "reset": true, 00:15:12.646 "nvme_admin": false, 00:15:12.646 "nvme_io": false, 00:15:12.646 "nvme_io_md": 
false, 00:15:12.646 "write_zeroes": true, 00:15:12.646 "zcopy": false, 00:15:12.646 "get_zone_info": false, 00:15:12.646 "zone_management": false, 00:15:12.646 "zone_append": false, 00:15:12.646 "compare": false, 00:15:12.646 "compare_and_write": false, 00:15:12.646 "abort": false, 00:15:12.646 "seek_hole": true, 00:15:12.646 "seek_data": true, 00:15:12.646 "copy": false, 00:15:12.646 "nvme_iov_md": false 00:15:12.646 }, 00:15:12.646 "driver_specific": { 00:15:12.646 "lvol": { 00:15:12.646 "lvol_store_uuid": "5f909de5-ce2d-435d-a879-c4b94ea8dd96", 00:15:12.646 "base_bdev": "aio_bdev", 00:15:12.646 "thin_provision": false, 00:15:12.646 "num_allocated_clusters": 38, 00:15:12.646 "snapshot": false, 00:15:12.646 "clone": false, 00:15:12.646 "esnap_clone": false 00:15:12.646 } 00:15:12.646 } 00:15:12.646 } 00:15:12.646 ] 00:15:12.646 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:12.646 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:15:12.646 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:12.905 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:12.905 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:15:12.905 09:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:13.163 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:13.163 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:13.421 [2024-07-15 09:23:24.457955] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:13.421 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:15:13.421 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:13.421 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:15:13.421 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.421 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.421 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.421 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.421 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:15:13.421 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.421 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.421 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:13.421 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:15:13.679 request: 00:15:13.679 { 00:15:13.679 "uuid": "5f909de5-ce2d-435d-a879-c4b94ea8dd96", 00:15:13.679 "method": "bdev_lvol_get_lvstores", 00:15:13.679 "req_id": 1 00:15:13.679 } 00:15:13.679 Got JSON-RPC error response 00:15:13.679 response: 00:15:13.679 { 00:15:13.679 "code": -19, 00:15:13.679 "message": "No such device" 00:15:13.679 } 00:15:13.679 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:13.679 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:13.679 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:13.679 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:13.679 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:13.937 aio_bdev 00:15:13.938 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6e727a95-6bdd-402d-ac05-704224158542 00:15:13.938 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6e727a95-6bdd-402d-ac05-704224158542 00:15:13.938 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:13.938 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:13.938 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:13.938 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:13.938 09:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:14.195 09:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e727a95-6bdd-402d-ac05-704224158542 -t 2000 00:15:14.453 [ 00:15:14.453 { 00:15:14.453 "name": "6e727a95-6bdd-402d-ac05-704224158542", 00:15:14.453 "aliases": [ 00:15:14.453 "lvs/lvol" 00:15:14.453 ], 00:15:14.453 "product_name": "Logical Volume", 00:15:14.453 "block_size": 4096, 00:15:14.453 "num_blocks": 38912, 00:15:14.453 "uuid": "6e727a95-6bdd-402d-ac05-704224158542", 00:15:14.453 "assigned_rate_limits": { 00:15:14.453 "rw_ios_per_sec": 0, 00:15:14.453 "rw_mbytes_per_sec": 0, 00:15:14.453 "r_mbytes_per_sec": 0, 00:15:14.453 "w_mbytes_per_sec": 0 00:15:14.453 }, 00:15:14.453 "claimed": false, 00:15:14.453 "zoned": false, 00:15:14.453 "supported_io_types": { 
00:15:14.453 "read": true, 00:15:14.453 "write": true, 00:15:14.453 "unmap": true, 00:15:14.453 "flush": false, 00:15:14.453 "reset": true, 00:15:14.453 "nvme_admin": false, 00:15:14.453 "nvme_io": false, 00:15:14.453 "nvme_io_md": false, 00:15:14.453 "write_zeroes": true, 00:15:14.453 "zcopy": false, 00:15:14.453 "get_zone_info": false, 00:15:14.453 "zone_management": false, 00:15:14.453 "zone_append": false, 00:15:14.453 "compare": false, 00:15:14.453 "compare_and_write": false, 00:15:14.453 "abort": false, 00:15:14.453 "seek_hole": true, 00:15:14.453 "seek_data": true, 00:15:14.453 "copy": false, 00:15:14.453 "nvme_iov_md": false 00:15:14.453 }, 00:15:14.453 "driver_specific": { 00:15:14.453 "lvol": { 00:15:14.453 "lvol_store_uuid": "5f909de5-ce2d-435d-a879-c4b94ea8dd96", 00:15:14.453 "base_bdev": "aio_bdev", 00:15:14.453 "thin_provision": false, 00:15:14.453 "num_allocated_clusters": 38, 00:15:14.453 "snapshot": false, 00:15:14.453 "clone": false, 00:15:14.453 "esnap_clone": false 00:15:14.453 } 00:15:14.453 } 00:15:14.453 } 00:15:14.453 ] 00:15:14.453 09:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:14.453 09:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:15:14.453 09:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:14.713 09:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:14.713 09:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:15:14.713 09:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:14.973 09:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:14.973 09:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6e727a95-6bdd-402d-ac05-704224158542 00:15:15.231 09:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f909de5-ce2d-435d-a879-c4b94ea8dd96 00:15:15.490 09:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:15.748 00:15:15.748 real 0m18.961s 00:15:15.748 user 0m48.044s 00:15:15.748 sys 0m4.569s 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:15.748 ************************************ 00:15:15.748 END TEST lvs_grow_dirty 00:15:15.748 ************************************ 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:15.748 nvmf_trace.0 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.748 rmmod nvme_tcp 00:15:15.748 rmmod nvme_fabrics 00:15:15.748 rmmod nvme_keyring 00:15:15.748 09:23:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 804569 ']' 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 804569 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 804569 ']' 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 804569 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 804569 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 804569' 00:15:16.007 killing process with pid 804569 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 804569 00:15:16.007 09:23:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 804569 00:15:16.267 09:23:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:16.267 09:23:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:16.267 09:23:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:16.267 09:23:27 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.267 09:23:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.267 09:23:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.267 09:23:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.267 09:23:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.169 09:23:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:18.169 00:15:18.169 real 0m41.599s 00:15:18.169 user 1m10.557s 00:15:18.169 sys 0m8.228s 00:15:18.169 09:23:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:18.169 09:23:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:18.169 ************************************ 00:15:18.169 END TEST nvmf_lvs_grow 00:15:18.169 ************************************ 00:15:18.169 09:23:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:18.169 09:23:29 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:18.169 09:23:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:18.169 09:23:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:18.169 09:23:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:18.169 ************************************ 00:15:18.169 START TEST nvmf_bdev_io_wait 00:15:18.169 ************************************ 00:15:18.169 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:18.429 * Looking for test storage... 
00:15:18.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:18.429 09:23:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:20.340 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:20.340 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:20.340 Found net devices under 0000:09:00.0: cvl_0_0 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:20.340 Found net devices under 0000:09:00.1: cvl_0_1 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:20.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:20.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:15:20.340 00:15:20.340 --- 10.0.0.2 ping statistics --- 00:15:20.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.340 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:20.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:15:20.340 00:15:20.340 --- 10.0.0.1 ping statistics --- 00:15:20.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.340 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=807029 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:20.340 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 807029 00:15:20.341 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 807029 ']' 00:15:20.341 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.341 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.341 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.341 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.341 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:20.601 [2024-07-15 09:23:31.545995] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:15:20.601 [2024-07-15 09:23:31.546073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.601 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.601 [2024-07-15 09:23:31.613023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.601 [2024-07-15 09:23:31.724860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.601 [2024-07-15 09:23:31.724913] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.601 [2024-07-15 09:23:31.724926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.601 [2024-07-15 09:23:31.724939] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.601 [2024-07-15 09:23:31.724949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.601 [2024-07-15 09:23:31.725001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.601 [2024-07-15 09:23:31.725056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.601 [2024-07-15 09:23:31.725125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.601 [2024-07-15 09:23:31.725128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.601 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.601 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:20.601 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:20.601 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:20.601 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:20.601 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.601 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:20.601 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.601 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:20.601 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.601 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:20.601 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.601 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:20.860 [2024-07-15 09:23:31.857368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
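Condensed sketch of the target bring-up just traced, assuming the same cvl_0_* interface names this host detected and rpc.py as shorthand for the workspace scripts/rpc.py:

  # one port of the E810 pair goes into a namespace; 10.0.0.1 <-> 10.0.0.2
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # start the target paused (--wait-for-rpc) so the bdev_io pool can be shrunk
  # before framework init; the tiny pool (5 entries, cache 1) is presumably what
  # forces the io-wait retry path under bdevperf's queue depth of 128
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
  rpc.py bdev_set_options -p 5 -c 1
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o -u 8192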
00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:20.860 Malloc0 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:20.860 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:20.861 [2024-07-15 09:23:31.929465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=807167 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=807169 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:20.861 { 00:15:20.861 "params": { 00:15:20.861 "name": "Nvme$subsystem", 00:15:20.861 "trtype": "$TEST_TRANSPORT", 00:15:20.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:20.861 "adrfam": "ipv4", 00:15:20.861 "trsvcid": "$NVMF_PORT", 00:15:20.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:20.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:20.861 "hdgst": ${hdgst:-false}, 00:15:20.861 "ddgst": ${ddgst:-false} 00:15:20.861 }, 00:15:20.861 "method": "bdev_nvme_attach_controller" 00:15:20.861 } 00:15:20.861 EOF 00:15:20.861 )") 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:20.861 09:23:31 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=807171 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:20.861 { 00:15:20.861 "params": { 00:15:20.861 "name": "Nvme$subsystem", 00:15:20.861 "trtype": "$TEST_TRANSPORT", 00:15:20.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:20.861 "adrfam": "ipv4", 00:15:20.861 "trsvcid": "$NVMF_PORT", 00:15:20.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:20.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:20.861 "hdgst": ${hdgst:-false}, 00:15:20.861 "ddgst": ${ddgst:-false} 00:15:20.861 }, 00:15:20.861 "method": "bdev_nvme_attach_controller" 00:15:20.861 } 00:15:20.861 EOF 00:15:20.861 )") 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=807174 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:20.861 { 00:15:20.861 "params": { 00:15:20.861 "name": "Nvme$subsystem", 00:15:20.861 "trtype": "$TEST_TRANSPORT", 00:15:20.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:20.861 "adrfam": "ipv4", 00:15:20.861 "trsvcid": "$NVMF_PORT", 00:15:20.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:20.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:20.861 "hdgst": ${hdgst:-false}, 00:15:20.861 "ddgst": ${ddgst:-false} 00:15:20.861 }, 00:15:20.861 "method": "bdev_nvme_attach_controller" 00:15:20.861 } 00:15:20.861 EOF 00:15:20.861 )") 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:15:20.861 { 00:15:20.861 "params": { 00:15:20.861 "name": "Nvme$subsystem", 00:15:20.861 "trtype": "$TEST_TRANSPORT", 00:15:20.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:20.861 "adrfam": "ipv4", 00:15:20.861 "trsvcid": "$NVMF_PORT", 00:15:20.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:20.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:20.861 "hdgst": ${hdgst:-false}, 00:15:20.861 "ddgst": ${ddgst:-false} 00:15:20.861 }, 00:15:20.861 "method": "bdev_nvme_attach_controller" 00:15:20.861 } 00:15:20.861 EOF 00:15:20.861 )") 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 807167 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:20.861 "params": { 00:15:20.861 "name": "Nvme1", 00:15:20.861 "trtype": "tcp", 00:15:20.861 "traddr": "10.0.0.2", 00:15:20.861 "adrfam": "ipv4", 00:15:20.861 "trsvcid": "4420", 00:15:20.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:20.861 "hdgst": false, 00:15:20.861 "ddgst": false 00:15:20.861 }, 00:15:20.861 "method": "bdev_nvme_attach_controller" 00:15:20.861 }' 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:20.861 "params": { 00:15:20.861 "name": "Nvme1", 00:15:20.861 "trtype": "tcp", 00:15:20.861 "traddr": "10.0.0.2", 00:15:20.861 "adrfam": "ipv4", 00:15:20.861 "trsvcid": "4420", 00:15:20.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:20.861 "hdgst": false, 00:15:20.861 "ddgst": false 00:15:20.861 }, 00:15:20.861 "method": "bdev_nvme_attach_controller" 00:15:20.861 }' 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:20.861 "params": { 00:15:20.861 "name": "Nvme1", 00:15:20.861 "trtype": "tcp", 00:15:20.861 "traddr": "10.0.0.2", 00:15:20.861 "adrfam": "ipv4", 00:15:20.861 "trsvcid": "4420", 00:15:20.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:20.861 "hdgst": false, 00:15:20.861 "ddgst": false 00:15:20.861 }, 00:15:20.861 "method": "bdev_nvme_attach_controller" 00:15:20.861 }' 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:20.861 09:23:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:20.861 "params": { 00:15:20.861 "name": "Nvme1", 00:15:20.861 "trtype": "tcp", 00:15:20.861 "traddr": "10.0.0.2", 00:15:20.861 "adrfam": "ipv4", 00:15:20.861 "trsvcid": "4420", 00:15:20.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:20.861 "hdgst": false, 00:15:20.861 "ddgst": false 00:15:20.861 }, 00:15:20.861 "method": 
"bdev_nvme_attach_controller" 00:15:20.861 }' 00:15:20.862 [2024-07-15 09:23:31.977288] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:15:20.862 [2024-07-15 09:23:31.977288] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:15:20.862 [2024-07-15 09:23:31.977383] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 09:23:31.977383] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:20.862 --proc-type=auto ] 00:15:20.862 [2024-07-15 09:23:31.979841] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:15:20.862 [2024-07-15 09:23:31.979914] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:20.862 [2024-07-15 09:23:31.983242] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:15:20.862 [2024-07-15 09:23:31.983296] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:20.862 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.121 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.121 [2024-07-15 09:23:32.152612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.121 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.121 [2024-07-15 09:23:32.251317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:21.121 [2024-07-15 09:23:32.256429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.380 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.380 [2024-07-15 09:23:32.354762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:21.380 [2024-07-15 09:23:32.357615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.380 [2024-07-15 09:23:32.424680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.380 [2024-07-15 09:23:32.455294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:21.380 [2024-07-15 09:23:32.522021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:21.639 Running I/O for 1 seconds... 00:15:21.639 Running I/O for 1 seconds... 00:15:21.639 Running I/O for 1 seconds... 00:15:21.639 Running I/O for 1 seconds... 
00:15:22.574 00:15:22.574 Latency(us) 00:15:22.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.575 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:22.575 Nvme1n1 : 1.01 11099.64 43.36 0.00 0.00 11487.38 6602.15 21068.61 00:15:22.575 =================================================================================================================== 00:15:22.575 Total : 11099.64 43.36 0.00 0.00 11487.38 6602.15 21068.61 00:15:22.575 00:15:22.575 Latency(us) 00:15:22.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.575 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:22.575 Nvme1n1 : 1.02 5461.08 21.33 0.00 0.00 23087.71 9466.31 39030.33 00:15:22.575 =================================================================================================================== 00:15:22.575 Total : 5461.08 21.33 0.00 0.00 23087.71 9466.31 39030.33 00:15:22.575 00:15:22.575 Latency(us) 00:15:22.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.575 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:22.575 Nvme1n1 : 1.00 181150.24 707.62 0.00 0.00 703.79 298.86 989.11 00:15:22.575 =================================================================================================================== 00:15:22.575 Total : 181150.24 707.62 0.00 0.00 703.79 298.86 989.11 00:15:22.833 00:15:22.833 Latency(us) 00:15:22.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.833 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:22.833 Nvme1n1 : 1.01 5645.47 22.05 0.00 0.00 22586.92 6505.05 50486.99 00:15:22.833 =================================================================================================================== 00:15:22.833 Total : 5645.47 22.05 0.00 0.00 22586.92 6505.05 50486.99 00:15:22.833 09:23:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 807169 00:15:22.833 09:23:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 807171 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 807174 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:23.092 rmmod nvme_tcp 00:15:23.092 rmmod nvme_fabrics 00:15:23.092 rmmod nvme_keyring 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 807029 ']' 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 807029 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 807029 ']' 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 807029 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 807029 00:15:23.092 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:23.093 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:23.093 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 807029' 00:15:23.093 killing process with pid 807029 00:15:23.093 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 807029 00:15:23.093 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 807029 00:15:23.353 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:23.353 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:23.353 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:23.353 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:23.353 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:23.353 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.353 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.353 09:23:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.891 09:23:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:25.891 00:15:25.891 real 0m7.186s 00:15:25.891 user 0m17.091s 00:15:25.891 sys 0m3.362s 00:15:25.891 09:23:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:25.891 09:23:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.891 ************************************ 00:15:25.891 END TEST nvmf_bdev_io_wait 00:15:25.891 ************************************ 00:15:25.891 09:23:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:25.891 09:23:36 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:25.891 09:23:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:25.891 09:23:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.891 09:23:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:25.891 ************************************ 00:15:25.891 START TEST nvmf_queue_depth 00:15:25.891 ************************************ 00:15:25.891 
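The nvmftestfini teardown at the end of the bdev_io_wait run above unwinds what the setup created; condensed, and with the namespace removal written out as an assumed expansion of the suite's _remove_spdk_ns helper:

    modprobe -v -r nvme-tcp            # also drops nvme_fabrics/nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid" # killprocess 807029 in the log
    ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1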
09:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:25.891 * Looking for test storage... 00:15:25.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.891 09:23:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:25.892 09:23:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:27.800 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:27.801 
09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:27.801 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:27.801 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:27.801 Found net devices under 0000:09:00.0: cvl_0_0 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:27.801 Found net devices under 0000:09:00.1: cvl_0_1 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:27.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:27.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:15:27.801 00:15:27.801 --- 10.0.0.2 ping statistics --- 00:15:27.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.801 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:27.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:27.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:15:27.801 00:15:27.801 --- 10.0.0.1 ping statistics --- 00:15:27.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.801 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=809392 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 809392 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 809392 ']' 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.801 09:23:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.801 [2024-07-15 09:23:38.916438] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
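The ping probes above are the tail end of nvmf_tcp_init, whose namespace plumbing is visible verbatim in the trace; collected in one place for readability:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns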
00:15:27.801 [2024-07-15 09:23:38.916531] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.801 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.801 [2024-07-15 09:23:38.980429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.061 [2024-07-15 09:23:39.090133] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.061 [2024-07-15 09:23:39.090202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.061 [2024-07-15 09:23:39.090232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.061 [2024-07-15 09:23:39.090243] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.061 [2024-07-15 09:23:39.090254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.061 [2024-07-15 09:23:39.090289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.061 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.061 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:28.061 09:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:28.061 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:28.061 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:28.061 09:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.061 09:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:28.061 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.061 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:28.061 [2024-07-15 09:23:39.231340] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.061 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.061 09:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:28.061 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.061 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:28.320 Malloc0 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.320 
09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:28.320 [2024-07-15 09:23:39.300835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=809417 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 809417 /var/tmp/bdevperf.sock 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 809417 ']' 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:28.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.320 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:28.320 [2024-07-15 09:23:39.350649] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
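Unlike the bdev_io_wait clients, this bdevperf starts idle (-z) and is driven over its own RPC socket, as the lines that follow show: the controller is attached at queue depth 1024, then the 10-second verify run is kicked off via bdevperf.py. Condensed:

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests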
00:15:28.320 [2024-07-15 09:23:39.350723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid809417 ] 00:15:28.320 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.320 [2024-07-15 09:23:39.409674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.579 [2024-07-15 09:23:39.518112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.579 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.579 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:28.579 09:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:28.579 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.579 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:28.839 NVMe0n1 00:15:28.839 09:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.839 09:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:28.839 Running I/O for 10 seconds... 00:15:41.036 00:15:41.036 Latency(us) 00:15:41.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.036 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:41.036 Verification LBA range: start 0x0 length 0x4000 00:15:41.036 NVMe0n1 : 10.08 9237.00 36.08 0.00 0.00 110459.35 21165.70 70681.79 00:15:41.036 =================================================================================================================== 00:15:41.036 Total : 9237.00 36.08 0.00 0.00 110459.35 21165.70 70681.79 00:15:41.036 0 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 809417 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 809417 ']' 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 809417 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 809417 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 809417' 00:15:41.036 killing process with pid 809417 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 809417 00:15:41.036 Received shutdown signal, test time was about 10.000000 seconds 00:15:41.036 00:15:41.036 Latency(us) 00:15:41.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.036 =================================================================================================================== 
00:15:41.036 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 809417 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.036 rmmod nvme_tcp 00:15:41.036 rmmod nvme_fabrics 00:15:41.036 rmmod nvme_keyring 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 809392 ']' 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 809392 00:15:41.036 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 809392 ']' 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 809392 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 809392 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 809392' 00:15:41.037 killing process with pid 809392 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 809392 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 809392 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.037 09:23:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.977 09:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:41.977 00:15:41.977 real 0m16.269s 00:15:41.977 user 0m21.902s 
00:15:41.977 sys 0m3.504s 00:15:41.977 09:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:41.977 09:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:41.977 ************************************ 00:15:41.977 END TEST nvmf_queue_depth 00:15:41.977 ************************************ 00:15:41.977 09:23:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:41.977 09:23:52 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:41.977 09:23:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:41.977 09:23:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:41.977 09:23:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:41.977 ************************************ 00:15:41.977 START TEST nvmf_target_multipath 00:15:41.977 ************************************ 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:41.977 * Looking for test storage... 00:15:41.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.977 09:23:52 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.977 09:23:52 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:41.978 09:23:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:43.993 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.993 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:43.994 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:43.994 Found net devices under 0000:09:00.0: cvl_0_0 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:43.994 Found net devices under 0000:09:00.1: cvl_0_1 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:43.994 09:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:43.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:15:43.994 00:15:43.994 --- 10.0.0.2 ping statistics --- 00:15:43.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.994 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:43.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:43.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:15:43.994 00:15:43.994 --- 10.0.0.1 ping statistics --- 00:15:43.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.994 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:43.994 only one NIC for nvmf test 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.994 rmmod nvme_tcp 00:15:43.994 rmmod nvme_fabrics 00:15:43.994 rmmod nvme_keyring 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.994 09:23:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.530 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:46.530 09:23:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:46.530 09:23:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:46.530 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:46.531 00:15:46.531 real 0m4.315s 00:15:46.531 user 0m0.823s 00:15:46.531 sys 0m1.498s 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.531 09:23:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:46.531 ************************************ 00:15:46.531 END TEST nvmf_target_multipath 00:15:46.531 ************************************ 00:15:46.531 09:23:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:46.531 09:23:57 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:46.531 09:23:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:46.531 09:23:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.531 09:23:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:46.531 ************************************ 00:15:46.531 START TEST nvmf_zcopy 00:15:46.531 ************************************ 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:46.531 * Looking for test storage... 
00:15:46.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:46.531 09:23:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:48.430 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:48.431 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.431 
09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:48.431 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:48.431 Found net devices under 0000:09:00.0: cvl_0_0 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:48.431 Found net devices under 0000:09:00.1: cvl_0_1 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:48.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:15:48.431 00:15:48.431 --- 10.0.0.2 ping statistics --- 00:15:48.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.431 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:48.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:48.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:15:48.431 00:15:48.431 --- 10.0.0.1 ping statistics --- 00:15:48.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.431 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=814600 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 814600 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 814600 ']' 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.431 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.431 [2024-07-15 09:23:59.605037] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:15:48.431 [2024-07-15 09:23:59.605129] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.690 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.690 [2024-07-15 09:23:59.669032] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.690 [2024-07-15 09:23:59.778962] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.690 [2024-07-15 09:23:59.779014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
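For reference, the nvmf_tcp_init sequence replayed a few lines up reduces to a small amount of iproute2 plumbing: one port of the two-port E810 NIC becomes the target side inside its own network namespace, while its sibling stays in the root namespace as the initiator. A condensed sketch in shell, using the interface and namespace names this particular run discovered (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk); on this rig the two ports are assumed to be physically cabled back to back:

    # target side: hide one port in its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1 in the root namespace, target answers on 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port towards the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1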
00:15:48.690 [2024-07-15 09:23:59.779054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.690 [2024-07-15 09:23:59.779066] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.690 [2024-07-15 09:23:59.779077] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.690 [2024-07-15 09:23:59.779124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.948 [2024-07-15 09:23:59.920075] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.948 [2024-07-15 09:23:59.936263] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.948 malloc0 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.948 
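The rpc_cmd calls traced above are the harness wrapper around scripts/rpc.py talking to the nvmf_tgt just started inside the namespace; the namespace attach that follows on the next line completes the sequence. Spelled out as plain rpc.py invocations, with every flag taken from the log (the comments are a best-effort reading of what the flags mean):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport; '-t tcp -o' comes from NVMF_TRANSPORT_OPTS, '-c 0 --zcopy' is this test's addition
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem allowing any host (-a), with a serial number (-s) and up to 10 namespaces (-m)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # data listener plus a discovery listener on the namespaced target IP
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, attached as namespace 1
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1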
09:23:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:48.948 { 00:15:48.948 "params": { 00:15:48.948 "name": "Nvme$subsystem", 00:15:48.948 "trtype": "$TEST_TRANSPORT", 00:15:48.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.948 "adrfam": "ipv4", 00:15:48.948 "trsvcid": "$NVMF_PORT", 00:15:48.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.948 "hdgst": ${hdgst:-false}, 00:15:48.948 "ddgst": ${ddgst:-false} 00:15:48.948 }, 00:15:48.948 "method": "bdev_nvme_attach_controller" 00:15:48.948 } 00:15:48.948 EOF 00:15:48.948 )") 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:48.948 09:23:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:48.948 "params": { 00:15:48.948 "name": "Nvme1", 00:15:48.948 "trtype": "tcp", 00:15:48.948 "traddr": "10.0.0.2", 00:15:48.948 "adrfam": "ipv4", 00:15:48.948 "trsvcid": "4420", 00:15:48.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:48.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:48.948 "hdgst": false, 00:15:48.948 "ddgst": false 00:15:48.948 }, 00:15:48.948 "method": "bdev_nvme_attach_controller" 00:15:48.948 }' 00:15:48.948 [2024-07-15 09:24:00.019367] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:15:48.948 [2024-07-15 09:24:00.019452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814633 ] 00:15:48.948 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.948 [2024-07-15 09:24:00.080902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.206 [2024-07-15 09:24:00.193449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.463 Running I/O for 10 seconds... 
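The JSON fragment printed just above is the bdev_nvme_attach_controller stanza that gen_nvmf_target_json folds into the config it hands to bdevperf, pointing the initiator at the subsystem created earlier. The --json /dev/fd/62 argument in the trace is consistent with a bash process substitution; a minimal sketch of the invocation under that assumption:

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    # 10-second verify workload, queue depth 128, 8 KiB I/Os, bdevs defined by the generated JSON
    $bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The second job further down reuses the same wiring with '-t 5 -q 128 -w randrw -M 50 -o 8192'; the "Requested NSID 1 already in use" errors interleaved with its startup come from repeated nvmf_subsystem_add_ns attempts made while namespace 1 already exists.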
00:15:59.479
00:15:59.479 Latency(us)
00:15:59.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:59.479 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:59.479 Verification LBA range: start 0x0 length 0x1000
00:15:59.479 Nvme1n1 : 10.01 6048.35 47.25 0.00 0.00 21106.02 2985.53 29515.47
00:15:59.479 ===================================================================================================================
00:15:59.479 Total : 6048.35 47.25 0.00 0.00 21106.02 2985.53 29515.47
00:15:59.738 09:24:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=816555
00:15:59.738 09:24:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:15:59.738 09:24:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:59.738 09:24:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:59.738 09:24:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:59.738 09:24:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:15:59.738 09:24:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:15:59.738 09:24:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:15:59.738 09:24:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:15:59.738 {
00:15:59.738 "params": {
00:15:59.738 "name": "Nvme$subsystem",
00:15:59.738 "trtype": "$TEST_TRANSPORT",
00:15:59.738 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:59.738 "adrfam": "ipv4",
00:15:59.738 "trsvcid": "$NVMF_PORT",
00:15:59.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:59.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:59.738 "hdgst": ${hdgst:-false},
00:15:59.738 "ddgst": ${ddgst:-false}
00:15:59.738 },
00:15:59.738 "method": "bdev_nvme_attach_controller"
00:15:59.738 }
00:15:59.738 EOF
00:15:59.738 )")
00:15:59.738 09:24:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:15:59.738 [2024-07-15 09:24:10.805588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.738 [2024-07-15 09:24:10.805625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:59.738 09:24:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:15:59.738 09:24:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:59.738 09:24:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:59.738 "params": { 00:15:59.738 "name": "Nvme1", 00:15:59.738 "trtype": "tcp", 00:15:59.738 "traddr": "10.0.0.2", 00:15:59.738 "adrfam": "ipv4", 00:15:59.738 "trsvcid": "4420", 00:15:59.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:59.738 "hdgst": false, 00:15:59.738 "ddgst": false 00:15:59.738 }, 00:15:59.738 "method": "bdev_nvme_attach_controller" 00:15:59.738 }' 00:15:59.738 [2024-07-15 09:24:10.813553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.738 [2024-07-15 09:24:10.813574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.738 [2024-07-15 09:24:10.821577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.738 [2024-07-15 09:24:10.821598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.738 [2024-07-15 09:24:10.829596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.738 [2024-07-15 09:24:10.829615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.738 [2024-07-15 09:24:10.837616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.738 [2024-07-15 09:24:10.837636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.738 [2024-07-15 09:24:10.845637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.738 [2024-07-15 09:24:10.845658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.738 [2024-07-15 09:24:10.845657] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:15:59.738 [2024-07-15 09:24:10.845739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid816555 ]
00:15:59.738 [2024-07-15 09:24:10.853658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.738 [2024-07-15 09:24:10.853678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats at 09:24:10.861 and 09:24:10.869 ...]
00:15:59.738 EAL: No free 2048 kB hugepages reported on node 1
[... error pair repeats every ~8 ms, 09:24:10.877 through 09:24:10.901 ...]
00:15:59.738 [2024-07-15 09:24:10.907791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[... error pair repeats every ~8 ms, 09:24:10.909 through 09:24:11.022 ...]
00:15:59.996 [2024-07-15 09:24:11.024213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... error pair repeats every ~8 ms, 09:24:11.030 through 09:24:11.351, then twice more at 09:24:11.397 and 09:24:11.403 ...]
00:16:00.255 Running I/O for 5 seconds...
[... error pair continues at ~12 ms intervals while I/O runs, 09:24:11.411 through 09:24:13.973, where the excerpt ends mid-record ...]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.856 [2024-07-15 09:24:13.985105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.856 [2024-07-15 09:24:13.985131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.856 [2024-07-15 09:24:13.997092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.856 [2024-07-15 09:24:13.997133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.856 [2024-07-15 09:24:14.008445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.856 [2024-07-15 09:24:14.008472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.856 [2024-07-15 09:24:14.020088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.856 [2024-07-15 09:24:14.020115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.856 [2024-07-15 09:24:14.031532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.856 [2024-07-15 09:24:14.031558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.856 [2024-07-15 09:24:14.042944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.856 [2024-07-15 09:24:14.042972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.054957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.054985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.066211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.066252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.078035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.078062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.089575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.089602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.100795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.100845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.112287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.112327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.124057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.124085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.135838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.135865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.147416] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.147442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.159363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.159389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.170673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.170699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.182611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.182637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.194295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.194321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.205930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.205958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.217661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.217687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.229579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.229605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.241173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.241213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.252964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.252991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.265304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.265330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.277072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.277100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.288823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.288851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.115 [2024-07-15 09:24:14.300444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.115 [2024-07-15 09:24:14.300470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.312450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.312478] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.324394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.324421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.336423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.336450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.347981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.348007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.359246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.359274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.372592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.372619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.383481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.383508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.395144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.395171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.405999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.406027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.417686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.417711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.429465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.429492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.440789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.440826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.452549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.452575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.465005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.465032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.476946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.476973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.488504] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.488530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.500476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.500502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.512351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.375 [2024-07-15 09:24:14.512377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.375 [2024-07-15 09:24:14.524415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.376 [2024-07-15 09:24:14.524441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.376 [2024-07-15 09:24:14.536216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.376 [2024-07-15 09:24:14.536243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.376 [2024-07-15 09:24:14.547751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.376 [2024-07-15 09:24:14.547777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.376 [2024-07-15 09:24:14.559894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.376 [2024-07-15 09:24:14.559922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.571911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.571955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.583971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.583998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.596173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.596199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.608319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.608346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.620167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.620195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.632094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.632135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.644071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.644097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.655763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.655789] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.667568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.667594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.679770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.679821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.691713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.691739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.702847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.702875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.714280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.714306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.725943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.725971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.737569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.737595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.749009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.749037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.760953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.760981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.772669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.772695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.784409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.784436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.795981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.796008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.806934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.806961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.635 [2024-07-15 09:24:14.818568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.635 [2024-07-15 09:24:14.818594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.895 [2024-07-15 09:24:14.830128] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.895 [2024-07-15 09:24:14.830166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.895 [2024-07-15 09:24:14.842303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.895 [2024-07-15 09:24:14.842330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.895 [2024-07-15 09:24:14.854020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.895 [2024-07-15 09:24:14.854049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.895 [2024-07-15 09:24:14.865851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.895 [2024-07-15 09:24:14.865880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.895 [2024-07-15 09:24:14.877524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.895 [2024-07-15 09:24:14.877565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.895 [2024-07-15 09:24:14.889313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.895 [2024-07-15 09:24:14.889340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.895 [2024-07-15 09:24:14.901070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.895 [2024-07-15 09:24:14.901112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.895 [2024-07-15 09:24:14.912917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.895 [2024-07-15 09:24:14.912945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.895 [2024-07-15 09:24:14.926064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.895 [2024-07-15 09:24:14.926091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.895 [2024-07-15 09:24:14.936698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.895 [2024-07-15 09:24:14.936724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.895 [2024-07-15 09:24:14.948655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.895 [2024-07-15 09:24:14.948681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.895 [2024-07-15 09:24:14.960017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.896 [2024-07-15 09:24:14.960045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.896 [2024-07-15 09:24:14.971744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.896 [2024-07-15 09:24:14.971770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.896 [2024-07-15 09:24:14.983867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.896 [2024-07-15 09:24:14.983895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.896 [2024-07-15 09:24:14.996000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.896 [2024-07-15 09:24:14.996027] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.896 [2024-07-15 09:24:15.007529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.896 [2024-07-15 09:24:15.007556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.896 [2024-07-15 09:24:15.019587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.896 [2024-07-15 09:24:15.019614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.896 [2024-07-15 09:24:15.031327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.896 [2024-07-15 09:24:15.031359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.896 [2024-07-15 09:24:15.045225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.896 [2024-07-15 09:24:15.045251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.896 [2024-07-15 09:24:15.056087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.896 [2024-07-15 09:24:15.056129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.896 [2024-07-15 09:24:15.067789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.896 [2024-07-15 09:24:15.067838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.896 [2024-07-15 09:24:15.079980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.896 [2024-07-15 09:24:15.080007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.156 [2024-07-15 09:24:15.091987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.156 [2024-07-15 09:24:15.092016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.156 [2024-07-15 09:24:15.103536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.156 [2024-07-15 09:24:15.103562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.156 [2024-07-15 09:24:15.115479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.156 [2024-07-15 09:24:15.115505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.156 [2024-07-15 09:24:15.127022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.156 [2024-07-15 09:24:15.127064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.156 [2024-07-15 09:24:15.138261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.156 [2024-07-15 09:24:15.138288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.156 [2024-07-15 09:24:15.149502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.156 [2024-07-15 09:24:15.149529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.156 [2024-07-15 09:24:15.160853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.156 [2024-07-15 09:24:15.160887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.156 [2024-07-15 09:24:15.172391] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.156 [2024-07-15 09:24:15.172418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.156 [2024-07-15 09:24:15.184176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.156 [2024-07-15 09:24:15.184203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.156 [2024-07-15 09:24:15.195684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.156 [2024-07-15 09:24:15.195710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.157 [2024-07-15 09:24:15.207678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.157 [2024-07-15 09:24:15.207705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.157 [2024-07-15 09:24:15.219210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.157 [2024-07-15 09:24:15.219237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.157 [2024-07-15 09:24:15.231131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.157 [2024-07-15 09:24:15.231157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.157 [2024-07-15 09:24:15.242808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.157 [2024-07-15 09:24:15.242835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.157 [2024-07-15 09:24:15.254651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.157 [2024-07-15 09:24:15.254685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.157 [2024-07-15 09:24:15.265973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.157 [2024-07-15 09:24:15.266014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.157 [2024-07-15 09:24:15.277139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.157 [2024-07-15 09:24:15.277166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.157 [2024-07-15 09:24:15.288900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.157 [2024-07-15 09:24:15.288927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.157 [2024-07-15 09:24:15.300833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.157 [2024-07-15 09:24:15.300860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.157 [2024-07-15 09:24:15.312684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.157 [2024-07-15 09:24:15.312710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.157 [2024-07-15 09:24:15.324488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.157 [2024-07-15 09:24:15.324515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.157 [2024-07-15 09:24:15.336500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.157 [2024-07-15 09:24:15.336527] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.157 [2024-07-15 09:24:15.349739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.157 [2024-07-15 09:24:15.349781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.360500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.360527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.372261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.372287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.384476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.384502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.396342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.396368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.409576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.409603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.420212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.420238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.431720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.431746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.443234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.443260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.456512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.456538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.467551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.467578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.479631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.479665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.491420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.491447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.505240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.505267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.516230] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.516257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.528306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.528333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.540404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.540431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.552500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.552527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.564201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.564228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.575973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.576000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.587682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.587709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.416 [2024-07-15 09:24:15.599910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.416 [2024-07-15 09:24:15.599938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.676 [2024-07-15 09:24:15.612080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.676 [2024-07-15 09:24:15.612108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.676 [2024-07-15 09:24:15.624120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.676 [2024-07-15 09:24:15.624147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.676 [2024-07-15 09:24:15.636014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.676 [2024-07-15 09:24:15.636057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.676 [2024-07-15 09:24:15.648180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.676 [2024-07-15 09:24:15.648206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.676 [2024-07-15 09:24:15.660455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.660481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.672835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.672863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.684835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.684862] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.696414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.696440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.708319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.708352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.719695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.719721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.731749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.731775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.743654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.743680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.755090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.755117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.767112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.767139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.779329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.779356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.791501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.791527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.803351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.803378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.815517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.815544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.827650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.827676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.839498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.839524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.851730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.851757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.677 [2024-07-15 09:24:15.863746] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.677 [2024-07-15 09:24:15.863772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:15.875322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:15.875349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:15.887580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:15.887606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:15.904398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:15.904425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:15.915376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:15.915416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:15.927350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:15.927377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:15.939015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:15.939057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:15.951008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:15.951050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:15.962826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:15.962867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:15.974822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:15.974858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:15.986947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:15.986975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:15.998918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:15.998947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:16.010776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:16.010828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:16.022495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:16.022522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:16.034356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:16.034383] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:16.046444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:16.046486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:16.058285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:16.058312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:16.070322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:16.070348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:16.082458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:16.082484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:16.094702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:16.094729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:16.108400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:16.108427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:16.119207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:16.119233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.937 [2024-07-15 09:24:16.130916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.937 [2024-07-15 09:24:16.130950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.142717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.142744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.155143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.155169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.167029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.167056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.178994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.179021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.191177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.191203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.202956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.202984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.214903] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.214945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.226707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.226733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.238419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.238445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.250929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.250957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.262541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.262567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.274513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.274539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.286705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.286732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.299149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.299189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.311007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.311034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.323326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.323353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.335719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.335745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.347651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.347678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.359864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.359906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.371532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.371558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.198 [2024-07-15 09:24:16.383302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.198 [2024-07-15 09:24:16.383329] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.395203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.395230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.407074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.407115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.419011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.419052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.429376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.429403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 00:16:05.457 Latency(us) 00:16:05.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.457 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:05.457 Nvme1n1 : 5.01 10802.22 84.39 0.00 0.00 11832.91 5412.79 26602.76 00:16:05.457 =================================================================================================================== 00:16:05.457 Total : 10802.22 84.39 0.00 0.00 11832.91 5412.79 26602.76 00:16:05.457 [2024-07-15 09:24:16.433616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.433638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.441632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.441654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.449651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.449672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.457729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.457769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.465759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.465812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.473768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.473817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.481784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.481835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.489812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.489852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.497843] 
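As a sanity check on the bdevperf summary above: 10802.22 IOPS at an 8192-byte I/O size is 10802.22 × 8192 / 2^20 ≈ 84.39 MiB/s, matching the reported throughput column, and with queue depth 128 an average latency of 11832.91 us implies roughly 128 / 0.01183291 ≈ 10818 IOPS, consistent with the measured rate.

The long run of paired errors before the summary appears to be the zcopy test deliberately re-adding a namespace while I/O is in flight: each nvmf_subsystem_add_ns RPC asks for NSID 1, which is already attached, so spdk_nvmf_subsystem_add_ns_ext() rejects it and the RPC layer logs "Unable to add namespace". A minimal standalone sketch of the same failure against a running target, assuming a subsystem nqn.2016-06.io.spdk:cnode1 that already has a namespace at NSID 1 (the Malloc0 bdev name is illustrative, not taken from this log):

  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1  # NSID 1 free: add succeeds
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1  # NSID 1 taken: "Requested NSID 1 already in use"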
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.497886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.505859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.505900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.513889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.513931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.521914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.521970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.529929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.529974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.537948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.537991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.545966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.546008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.553984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.554027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.562004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.562045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.570015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.570053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.457 [2024-07-15 09:24:16.578003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.457 [2024-07-15 09:24:16.578024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.458 [2024-07-15 09:24:16.586023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.458 [2024-07-15 09:24:16.586044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.458 [2024-07-15 09:24:16.594043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.458 [2024-07-15 09:24:16.594064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.458 [2024-07-15 09:24:16.602065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.458 [2024-07-15 09:24:16.602099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.458 [2024-07-15 09:24:16.610139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.458 [2024-07-15 09:24:16.610180] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.458 [2024-07-15 09:24:16.618160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.458 [2024-07-15 09:24:16.618204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.458 [2024-07-15 09:24:16.626189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.458 [2024-07-15 09:24:16.626225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.458 [2024-07-15 09:24:16.634161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.458 [2024-07-15 09:24:16.634181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.458 [2024-07-15 09:24:16.642189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.458 [2024-07-15 09:24:16.642209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.458 [2024-07-15 09:24:16.650214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.458 [2024-07-15 09:24:16.650233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.717 [2024-07-15 09:24:16.658226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.717 [2024-07-15 09:24:16.658248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.717 [2024-07-15 09:24:16.666304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.718 [2024-07-15 09:24:16.666348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.718 [2024-07-15 09:24:16.674318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.718 [2024-07-15 09:24:16.674371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.718 [2024-07-15 09:24:16.682284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.718 [2024-07-15 09:24:16.682304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.718 [2024-07-15 09:24:16.690306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.718 [2024-07-15 09:24:16.690325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.718 [2024-07-15 09:24:16.698324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.718 [2024-07-15 09:24:16.698344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (816555) - No such process 00:16:05.718 09:24:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 816555 00:16:05.718 09:24:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.718 09:24:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.718 09:24:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:05.718 09:24:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.718 09:24:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:05.718 09:24:16 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.718 09:24:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:05.718 delay0 00:16:05.718 09:24:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.718 09:24:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:05.718 09:24:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.718 09:24:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:05.718 09:24:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.718 09:24:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:05.718 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.718 [2024-07-15 09:24:16.855969] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:12.293 Initializing NVMe Controllers 00:16:12.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:12.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:12.293 Initialization complete. Launching workers. 00:16:12.293 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 103 00:16:12.293 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 390, failed to submit 33 00:16:12.293 success 215, unsuccess 175, failed 0 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.293 rmmod nvme_tcp 00:16:12.293 rmmod nvme_fabrics 00:16:12.293 rmmod nvme_keyring 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 814600 ']' 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 814600 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 814600 ']' 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 814600 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 814600 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:12.293 
09:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 814600' 00:16:12.293 killing process with pid 814600 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 814600 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 814600 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.293 09:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.829 09:24:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:14.829 00:16:14.829 real 0m28.261s 00:16:14.829 user 0m41.993s 00:16:14.829 sys 0m7.830s 00:16:14.829 09:24:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:14.829 09:24:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:14.829 ************************************ 00:16:14.829 END TEST nvmf_zcopy 00:16:14.829 ************************************ 00:16:14.829 09:24:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:14.829 09:24:25 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:14.829 09:24:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:14.829 09:24:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.829 09:24:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:14.829 ************************************ 00:16:14.829 START TEST nvmf_nmic 00:16:14.829 ************************************ 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:14.829 * Looking for test storage... 
00:16:14.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.829 09:24:25 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.829 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:14.830 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:14.830 09:24:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:14.830 09:24:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:16.731 
09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:16.731 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.731 09:24:27 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:16.731 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.731 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:16.732 Found net devices under 0000:09:00.0: cvl_0_0 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:16.732 Found net devices under 0000:09:00.1: cvl_0_1 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
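The interface setup that follows splits the two-port E810 card into two independent IP stacks on one host: port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and becomes the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed into a standalone sketch, using the interface and namespace names detected above (other NICs will enumerate differently):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP reach the initiator side

Since the two ports sit on the same physical card, the pings below cross the actual wire rather than loopback (note the 0.210 ms round trip to the target versus 0.043 ms inside the namespace), which is what NET_TYPE=phy is exercising.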
00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:16.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:16:16.732 00:16:16.732 --- 10.0.0.2 ping statistics --- 00:16:16.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.732 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:16:16.732 00:16:16.732 --- 10.0.0.1 ping statistics --- 00:16:16.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.732 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=819937 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 819937 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 819937 ']' 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.732 09:24:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:16.732 [2024-07-15 09:24:27.849962] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:16:16.732 [2024-07-15 09:24:27.850052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.732 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.732 [2024-07-15 09:24:27.912028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.991 [2024-07-15 09:24:28.014545] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.991 [2024-07-15 09:24:28.014612] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:16.991 [2024-07-15 09:24:28.014625] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.991 [2024-07-15 09:24:28.014639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.991 [2024-07-15 09:24:28.014664] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.991 [2024-07-15 09:24:28.014750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.991 [2024-07-15 09:24:28.014867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.991 [2024-07-15 09:24:28.014894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.991 [2024-07-15 09:24:28.014897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.991 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.991 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:16.991 09:24:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:16.991 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:16.991 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:16.991 09:24:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.991 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:16.991 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.991 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:16.991 [2024-07-15 09:24:28.172630] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.991 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.991 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:16.991 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.991 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.251 Malloc0 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.251 [2024-07-15 09:24:28.226229] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:17.251 test case1: single bdev can't be used in multiple subsystems 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:17.251 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.252 [2024-07-15 09:24:28.250081] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:17.252 [2024-07-15 09:24:28.250124] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:17.252 [2024-07-15 09:24:28.250139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.252 request: 00:16:17.252 { 00:16:17.252 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:17.252 "namespace": { 00:16:17.252 "bdev_name": "Malloc0", 00:16:17.252 "no_auto_visible": false 00:16:17.252 }, 00:16:17.252 "method": "nvmf_subsystem_add_ns", 00:16:17.252 "req_id": 1 00:16:17.252 } 00:16:17.252 Got JSON-RPC error response 00:16:17.252 response: 00:16:17.252 { 00:16:17.252 "code": -32602, 00:16:17.252 "message": "Invalid parameters" 00:16:17.252 } 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:17.252 Adding namespace failed - expected result. 
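The JSON-RPC failure above is the pass condition for case1: once Malloc0 is added to cnode1 it is claimed type exclusive_write by the NVMe-oF target module, so the second nvmf_subsystem_add_ns against cnode2 is rejected with error=-1 and surfaces as the -32602 "Invalid parameters" response. A minimal by-hand reproduction with SPDK's scripts/rpc.py against a running nvmf_tgt, reusing the exact names from the trace, would look like:

    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed

Only the last call fails; cnode1 keeps the namespace, which is what case2 below relies on when it adds a second listener (port 4421) to the same subsystem and connects over both paths.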
00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:17.252 test case2: host connect to nvmf target in multiple paths 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:17.252 [2024-07-15 09:24:28.262235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.252 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.819 09:24:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:18.390 09:24:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:18.390 09:24:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:18.390 09:24:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.390 09:24:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:18.390 09:24:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:20.291 09:24:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:20.291 09:24:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:20.291 09:24:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:20.291 09:24:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:20.291 09:24:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.291 09:24:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:20.291 09:24:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:20.291 [global] 00:16:20.291 thread=1 00:16:20.291 invalidate=1 00:16:20.291 rw=write 00:16:20.291 time_based=1 00:16:20.291 runtime=1 00:16:20.291 ioengine=libaio 00:16:20.291 direct=1 00:16:20.291 bs=4096 00:16:20.291 iodepth=1 00:16:20.291 norandommap=0 00:16:20.291 numjobs=1 00:16:20.291 00:16:20.291 verify_dump=1 00:16:20.291 verify_backlog=512 00:16:20.291 verify_state_save=0 00:16:20.291 do_verify=1 00:16:20.291 verify=crc32c-intel 00:16:20.291 [job0] 00:16:20.291 filename=/dev/nvme0n1 00:16:20.549 Could not set queue depth (nvme0n1) 00:16:20.808 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.808 fio-3.35 00:16:20.808 Starting 1 thread 00:16:22.192 00:16:22.192 job0: (groupid=0, jobs=1): err= 0: pid=820457: Mon Jul 15 09:24:32 2024 00:16:22.192 read: IOPS=21, BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec) 00:16:22.192 slat (nsec): min=11144, max=39229, avg=29539.23, stdev=9295.11 
00:16:22.192 clat (usec): min=40932, max=41067, avg=40966.62, stdev=38.25 00:16:22.192 lat (usec): min=40962, max=41078, avg=40996.16, stdev=33.43 00:16:22.192 clat percentiles (usec): 00:16:22.192 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:22.192 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:22.192 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:22.192 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:22.192 | 99.99th=[41157] 00:16:22.192 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:16:22.192 slat (usec): min=10, max=31900, avg=80.12, stdev=1409.06 00:16:22.192 clat (usec): min=128, max=259, avg=160.52, stdev=17.25 00:16:22.192 lat (usec): min=140, max=32142, avg=240.64, stdev=1412.79 00:16:22.192 clat percentiles (usec): 00:16:22.192 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 145], 00:16:22.192 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:16:22.192 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 188], 00:16:22.192 | 99.00th=[ 206], 99.50th=[ 217], 99.90th=[ 260], 99.95th=[ 260], 00:16:22.192 | 99.99th=[ 260] 00:16:22.192 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:22.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:22.192 lat (usec) : 250=95.69%, 500=0.19% 00:16:22.192 lat (msec) : 50=4.12% 00:16:22.192 cpu : usr=0.88%, sys=0.88%, ctx=537, majf=0, minf=2 00:16:22.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:22.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.192 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:22.192 00:16:22.192 Run status group 0 (all jobs): 00:16:22.192 READ: bw=85.6KiB/s (87.7kB/s), 85.6KiB/s-85.6KiB/s (87.7kB/s-87.7kB/s), io=88.0KiB (90.1kB), run=1028-1028msec 00:16:22.192 WRITE: bw=1992KiB/s (2040kB/s), 1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2048KiB (2097kB), run=1028-1028msec 00:16:22.192 00:16:22.192 Disk stats (read/write): 00:16:22.192 nvme0n1: ios=70/512, merge=0/0, ticks=1097/73, in_queue=1170, util=98.90% 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
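For reference, the fio-wrapper call above (-p nvmf -i 4096 -d 1 -t write -r 1 -v) corresponds roughly to the following standalone fio invocation, assuming the connected namespace surfaced as /dev/nvme0n1 as in this run; the flags mirror the generated job file printed before the run:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512

The crc32c-intel verify pass, not throughput, is what the test is after, so the small bandwidth figures in the summary are expected for a queue-depth-1, one-second job.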
00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.192 rmmod nvme_tcp 00:16:22.192 rmmod nvme_fabrics 00:16:22.192 rmmod nvme_keyring 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 819937 ']' 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 819937 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 819937 ']' 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 819937 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 819937 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 819937' 00:16:22.192 killing process with pid 819937 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 819937 00:16:22.192 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 819937 00:16:22.450 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:22.450 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:22.450 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:22.450 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.450 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.450 09:24:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.450 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.450 09:24:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.986 09:24:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:24.986 00:16:24.986 real 0m10.041s 00:16:24.986 user 0m22.862s 00:16:24.986 sys 0m2.538s 00:16:24.986 09:24:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.986 09:24:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:24.986 ************************************ 00:16:24.986 END TEST nvmf_nmic 00:16:24.986 ************************************ 00:16:24.986 09:24:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:24.986 09:24:35 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:24.986 09:24:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # 
'[' 3 -le 1 ']' 00:16:24.986 09:24:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.986 09:24:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:24.986 ************************************ 00:16:24.986 START TEST nvmf_fio_target 00:16:24.986 ************************************ 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:24.986 * Looking for test storage... 00:16:24.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:24.986 09:24:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:26.883 09:24:37 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:26.883 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:26.883 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.883 09:24:37 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:26.883 Found net devices under 0000:09:00.0: cvl_0_0 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:26.883 Found net devices under 0000:09:00.1: cvl_0_1 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:26.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:16:26.883 00:16:26.883 --- 10.0.0.2 ping statistics --- 00:16:26.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.883 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:26.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:26.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:16:26.883 00:16:26.883 --- 10.0.0.1 ping statistics --- 00:16:26.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.883 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=822647 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 822647 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 822647 ']' 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
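The nvmftestinit sequence above boils down to a small two-port topology: the first e810 port (cvl_0_0) is moved into a dedicated network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the default namespace as the initiator side at 10.0.0.1. A minimal sketch of the equivalent setup, using the interface and namespace names from this run:

# Target-side port lives in its own namespace; initiator-side port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Accept NVMe/TCP (port 4420) traffic on the initiator-side interface, then verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that in place, nvmf_tgt is started inside the namespace (via ip netns exec cvl_0_0_ns_spdk, as in the trace) so it serves on the target-side address.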
00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.883 09:24:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.883 [2024-07-15 09:24:38.033382] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:16:26.883 [2024-07-15 09:24:38.033476] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.883 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.142 [2024-07-15 09:24:38.097463] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.142 [2024-07-15 09:24:38.198168] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.142 [2024-07-15 09:24:38.198220] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.142 [2024-07-15 09:24:38.198244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.142 [2024-07-15 09:24:38.198255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.142 [2024-07-15 09:24:38.198264] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.142 [2024-07-15 09:24:38.198349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.142 [2024-07-15 09:24:38.198459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.142 [2024-07-15 09:24:38.198591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.142 [2024-07-15 09:24:38.198584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.142 09:24:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.142 09:24:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:27.142 09:24:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:27.142 09:24:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:27.142 09:24:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.399 09:24:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.399 09:24:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:27.656 [2024-07-15 09:24:38.618477] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.656 09:24:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:27.913 09:24:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:27.913 09:24:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:28.171 09:24:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:28.171 09:24:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:28.428 09:24:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
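The RPC sequence running through this stretch of the trace (and continuing below) configures the whole target: a TCP transport, seven 64 MiB / 512 B malloc bdevs, a raid0 array over Malloc2/Malloc3, a concat array over Malloc4-Malloc6, and one subsystem exposing four namespaces on 10.0.0.2:4420. Condensed into plain rpc.py calls, and assuming the default /var/tmp/spdk.sock RPC socket, it is roughly:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                     # repeated seven times for Malloc0 .. Malloc6
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # likewise Malloc1, raid0, concat0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side then runs nvme connect against that subsystem and polls lsblk until the expected four SPDKISFASTANDAWESOME namespaces (nvme0n1 through nvme0n4) appear.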
00:16:28.428 09:24:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:28.686 09:24:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:28.686 09:24:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:28.944 09:24:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:29.202 09:24:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:29.202 09:24:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:29.461 09:24:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:29.461 09:24:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:29.720 09:24:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:29.720 09:24:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:29.978 09:24:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:30.236 09:24:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:30.236 09:24:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:30.493 09:24:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:30.493 09:24:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:30.751 09:24:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.009 [2024-07-15 09:24:42.060956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.009 09:24:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:31.266 09:24:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:31.524 09:24:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:32.093 09:24:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:32.093 09:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:16:32.093 09:24:43 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:32.093 09:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:16:32.093 09:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:16:32.093 09:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:16:33.991 09:24:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:33.991 09:24:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:33.991 09:24:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.991 09:24:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:16:33.991 09:24:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.991 09:24:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:16:33.991 09:24:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:33.991 [global] 00:16:33.991 thread=1 00:16:33.991 invalidate=1 00:16:33.991 rw=write 00:16:33.991 time_based=1 00:16:33.991 runtime=1 00:16:33.991 ioengine=libaio 00:16:33.991 direct=1 00:16:33.991 bs=4096 00:16:33.991 iodepth=1 00:16:33.991 norandommap=0 00:16:33.991 numjobs=1 00:16:33.991 00:16:33.991 verify_dump=1 00:16:33.991 verify_backlog=512 00:16:33.991 verify_state_save=0 00:16:33.991 do_verify=1 00:16:33.991 verify=crc32c-intel 00:16:33.991 [job0] 00:16:33.991 filename=/dev/nvme0n1 00:16:33.991 [job1] 00:16:33.991 filename=/dev/nvme0n2 00:16:33.991 [job2] 00:16:33.991 filename=/dev/nvme0n3 00:16:33.991 [job3] 00:16:33.991 filename=/dev/nvme0n4 00:16:34.248 Could not set queue depth (nvme0n1) 00:16:34.248 Could not set queue depth (nvme0n2) 00:16:34.248 Could not set queue depth (nvme0n3) 00:16:34.248 Could not set queue depth (nvme0n4) 00:16:34.248 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.248 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.248 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.248 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.248 fio-3.35 00:16:34.248 Starting 4 threads 00:16:35.630 00:16:35.630 job0: (groupid=0, jobs=1): err= 0: pid=823604: Mon Jul 15 09:24:46 2024 00:16:35.630 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:35.630 slat (nsec): min=6685, max=50524, avg=13664.16, stdev=3234.15 00:16:35.630 clat (usec): min=173, max=41070, avg=251.84, stdev=1270.62 00:16:35.630 lat (usec): min=194, max=41080, avg=265.50, stdev=1270.51 00:16:35.630 clat percentiles (usec): 00:16:35.630 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 196], 00:16:35.630 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:16:35.630 | 70.00th=[ 219], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 251], 00:16:35.630 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 334], 99.95th=[40633], 00:16:35.630 | 99.99th=[41157] 00:16:35.630 write: IOPS=2445, BW=9782KiB/s (10.0MB/s)(9792KiB/1001msec); 0 zone resets 00:16:35.630 slat (nsec): min=7898, max=44472, avg=16242.29, stdev=5265.02 00:16:35.630 clat 
(usec): min=126, max=313, avg=162.57, stdev=25.50 00:16:35.630 lat (usec): min=135, max=358, avg=178.81, stdev=25.53 00:16:35.630 clat percentiles (usec): 00:16:35.630 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:16:35.630 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:16:35.630 | 70.00th=[ 165], 80.00th=[ 182], 90.00th=[ 202], 95.00th=[ 217], 00:16:35.630 | 99.00th=[ 247], 99.50th=[ 262], 99.90th=[ 281], 99.95th=[ 302], 00:16:35.630 | 99.99th=[ 314] 00:16:35.630 bw ( KiB/s): min= 8192, max= 8192, per=35.63%, avg=8192.00, stdev= 0.00, samples=1 00:16:35.630 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:35.630 lat (usec) : 250=96.86%, 500=3.09% 00:16:35.630 lat (msec) : 50=0.04% 00:16:35.630 cpu : usr=4.40%, sys=7.30%, ctx=4497, majf=0, minf=2 00:16:35.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.630 issued rwts: total=2048,2448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.630 job1: (groupid=0, jobs=1): err= 0: pid=823605: Mon Jul 15 09:24:46 2024 00:16:35.630 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:35.630 slat (nsec): min=5165, max=66660, avg=13505.46, stdev=7723.30 00:16:35.630 clat (usec): min=165, max=40973, avg=251.29, stdev=903.64 00:16:35.630 lat (usec): min=171, max=40993, avg=264.79, stdev=904.32 00:16:35.630 clat percentiles (usec): 00:16:35.630 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:16:35.630 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:16:35.630 | 70.00th=[ 221], 80.00th=[ 245], 90.00th=[ 318], 95.00th=[ 461], 00:16:35.630 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 529], 99.95th=[ 553], 00:16:35.630 | 99.99th=[41157] 00:16:35.630 write: IOPS=2308, BW=9235KiB/s (9456kB/s)(9244KiB/1001msec); 0 zone resets 00:16:35.630 slat (usec): min=6, max=14772, avg=20.59, stdev=307.03 00:16:35.630 clat (usec): min=129, max=546, avg=170.32, stdev=30.36 00:16:35.630 lat (usec): min=136, max=15144, avg=190.91, stdev=312.67 00:16:35.630 clat percentiles (usec): 00:16:35.630 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:16:35.630 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 174], 00:16:35.630 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 210], 95.00th=[ 229], 00:16:35.630 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 314], 99.95th=[ 371], 00:16:35.630 | 99.99th=[ 545] 00:16:35.630 bw ( KiB/s): min= 8192, max= 8192, per=35.63%, avg=8192.00, stdev= 0.00, samples=1 00:16:35.630 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:35.630 lat (usec) : 250=90.82%, 500=8.63%, 750=0.53% 00:16:35.630 lat (msec) : 50=0.02% 00:16:35.630 cpu : usr=3.30%, sys=6.30%, ctx=4361, majf=0, minf=1 00:16:35.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.630 issued rwts: total=2048,2311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.630 job2: (groupid=0, jobs=1): err= 0: pid=823606: Mon Jul 15 09:24:46 2024 00:16:35.630 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 
00:16:35.630 slat (nsec): min=6925, max=36506, avg=29187.05, stdev=9968.62 00:16:35.630 clat (usec): min=40810, max=42082, avg=41094.31, stdev=361.20 00:16:35.630 lat (usec): min=40830, max=42097, avg=41123.49, stdev=354.81 00:16:35.630 clat percentiles (usec): 00:16:35.630 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:35.630 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:35.630 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:16:35.630 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:35.630 | 99.99th=[42206] 00:16:35.630 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:16:35.630 slat (nsec): min=6405, max=65257, avg=11841.16, stdev=6791.24 00:16:35.630 clat (usec): min=141, max=823, avg=177.16, stdev=33.42 00:16:35.630 lat (usec): min=151, max=832, avg=189.00, stdev=33.95 00:16:35.630 clat percentiles (usec): 00:16:35.630 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:16:35.630 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:16:35.630 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 204], 00:16:35.630 | 99.00th=[ 235], 99.50th=[ 273], 99.90th=[ 824], 99.95th=[ 824], 00:16:35.630 | 99.99th=[ 824] 00:16:35.630 bw ( KiB/s): min= 4096, max= 4096, per=17.81%, avg=4096.00, stdev= 0.00, samples=1 00:16:35.630 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:35.630 lat (usec) : 250=94.94%, 500=0.75%, 1000=0.19% 00:16:35.630 lat (msec) : 50=4.12% 00:16:35.630 cpu : usr=0.30%, sys=0.60%, ctx=535, majf=0, minf=1 00:16:35.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.630 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.630 job3: (groupid=0, jobs=1): err= 0: pid=823608: Mon Jul 15 09:24:46 2024 00:16:35.630 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec) 00:16:35.630 slat (nsec): min=8802, max=33729, avg=26640.33, stdev=9110.05 00:16:35.630 clat (usec): min=40519, max=42079, avg=41135.03, stdev=452.61 00:16:35.630 lat (usec): min=40528, max=42094, avg=41161.67, stdev=448.80 00:16:35.630 clat percentiles (usec): 00:16:35.630 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:35.630 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:35.630 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:16:35.630 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:35.630 | 99.99th=[42206] 00:16:35.630 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:16:35.630 slat (usec): min=6, max=14822, avg=39.74, stdev=654.64 00:16:35.630 clat (usec): min=153, max=1422, avg=234.27, stdev=88.20 00:16:35.630 lat (usec): min=161, max=15230, avg=274.00, stdev=668.19 00:16:35.630 clat percentiles (usec): 00:16:35.630 | 1.00th=[ 157], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 194], 00:16:35.630 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 227], 00:16:35.630 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 285], 95.00th=[ 367], 00:16:35.630 | 99.00th=[ 478], 99.50th=[ 930], 99.90th=[ 1418], 99.95th=[ 1418], 00:16:35.630 | 99.99th=[ 1418] 00:16:35.630 bw ( KiB/s): min= 4096, max= 4096, 
per=17.81%, avg=4096.00, stdev= 0.00, samples=1 00:16:35.630 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:35.630 lat (usec) : 250=78.24%, 500=17.07%, 750=0.19%, 1000=0.38% 00:16:35.630 lat (msec) : 2=0.19%, 50=3.94% 00:16:35.630 cpu : usr=0.20%, sys=0.60%, ctx=535, majf=0, minf=1 00:16:35.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.630 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.630 00:16:35.630 Run status group 0 (all jobs): 00:16:35.631 READ: bw=16.1MiB/s (16.9MB/s), 83.5KiB/s-8184KiB/s (85.5kB/s-8380kB/s), io=16.2MiB (17.0MB), run=1001-1006msec 00:16:35.631 WRITE: bw=22.5MiB/s (23.5MB/s), 2036KiB/s-9782KiB/s (2085kB/s-10.0MB/s), io=22.6MiB (23.7MB), run=1001-1006msec 00:16:35.631 00:16:35.631 Disk stats (read/write): 00:16:35.631 nvme0n1: ios=1724/2048, merge=0/0, ticks=1279/325, in_queue=1604, util=85.27% 00:16:35.631 nvme0n2: ios=1752/2048, merge=0/0, ticks=651/346, in_queue=997, util=91.74% 00:16:35.631 nvme0n3: ios=75/512, merge=0/0, ticks=938/86, in_queue=1024, util=93.07% 00:16:35.631 nvme0n4: ios=77/512, merge=0/0, ticks=902/115, in_queue=1017, util=97.04% 00:16:35.631 09:24:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:35.631 [global] 00:16:35.631 thread=1 00:16:35.631 invalidate=1 00:16:35.631 rw=randwrite 00:16:35.631 time_based=1 00:16:35.631 runtime=1 00:16:35.631 ioengine=libaio 00:16:35.631 direct=1 00:16:35.631 bs=4096 00:16:35.631 iodepth=1 00:16:35.631 norandommap=0 00:16:35.631 numjobs=1 00:16:35.631 00:16:35.631 verify_dump=1 00:16:35.631 verify_backlog=512 00:16:35.631 verify_state_save=0 00:16:35.631 do_verify=1 00:16:35.631 verify=crc32c-intel 00:16:35.631 [job0] 00:16:35.631 filename=/dev/nvme0n1 00:16:35.631 [job1] 00:16:35.631 filename=/dev/nvme0n2 00:16:35.631 [job2] 00:16:35.631 filename=/dev/nvme0n3 00:16:35.631 [job3] 00:16:35.631 filename=/dev/nvme0n4 00:16:35.631 Could not set queue depth (nvme0n1) 00:16:35.631 Could not set queue depth (nvme0n2) 00:16:35.631 Could not set queue depth (nvme0n3) 00:16:35.631 Could not set queue depth (nvme0n4) 00:16:35.888 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:35.889 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:35.889 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:35.889 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:35.889 fio-3.35 00:16:35.889 Starting 4 threads 00:16:37.265 00:16:37.265 job0: (groupid=0, jobs=1): err= 0: pid=823941: Mon Jul 15 09:24:48 2024 00:16:37.265 read: IOPS=519, BW=2079KiB/s (2129kB/s)(2108KiB/1014msec) 00:16:37.265 slat (nsec): min=5990, max=51698, avg=16630.20, stdev=5764.89 00:16:37.265 clat (usec): min=189, max=41175, avg=1366.42, stdev=6584.79 00:16:37.265 lat (usec): min=195, max=41195, avg=1383.05, stdev=6586.48 00:16:37.265 clat percentiles (usec): 00:16:37.265 | 1.00th=[ 194], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 229], 00:16:37.265 | 30.00th=[ 233], 
40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 253], 00:16:37.265 | 70.00th=[ 265], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 338], 00:16:37.265 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:37.265 | 99.99th=[41157] 00:16:37.265 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:16:37.265 slat (nsec): min=8165, max=66469, avg=21452.20, stdev=9248.52 00:16:37.265 clat (usec): min=135, max=525, avg=248.72, stdev=77.37 00:16:37.265 lat (usec): min=152, max=566, avg=270.18, stdev=78.55 00:16:37.265 clat percentiles (usec): 00:16:37.265 | 1.00th=[ 147], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 186], 00:16:37.265 | 30.00th=[ 200], 40.00th=[ 217], 50.00th=[ 227], 60.00th=[ 239], 00:16:37.265 | 70.00th=[ 255], 80.00th=[ 306], 90.00th=[ 388], 95.00th=[ 420], 00:16:37.265 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 498], 99.95th=[ 529], 00:16:37.265 | 99.99th=[ 529] 00:16:37.265 bw ( KiB/s): min= 8192, max= 8192, per=83.04%, avg=8192.00, stdev= 0.00, samples=1 00:16:37.265 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:37.265 lat (usec) : 250=64.28%, 500=34.36%, 750=0.39% 00:16:37.265 lat (msec) : 20=0.06%, 50=0.90% 00:16:37.265 cpu : usr=2.17%, sys=3.95%, ctx=1552, majf=0, minf=1 00:16:37.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:37.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.266 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:37.266 job1: (groupid=0, jobs=1): err= 0: pid=823953: Mon Jul 15 09:24:48 2024 00:16:37.266 read: IOPS=59, BW=237KiB/s (243kB/s)(240KiB/1013msec) 00:16:37.266 slat (nsec): min=5631, max=46153, avg=17918.80, stdev=9430.83 00:16:37.266 clat (usec): min=262, max=41608, avg=13882.18, stdev=19344.65 00:16:37.266 lat (usec): min=268, max=41625, avg=13900.10, stdev=19349.26 00:16:37.266 clat percentiles (usec): 00:16:37.266 | 1.00th=[ 265], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 302], 00:16:37.266 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 338], 00:16:37.266 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:37.266 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:37.266 | 99.99th=[41681] 00:16:37.266 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:16:37.266 slat (nsec): min=7087, max=82596, avg=21273.05, stdev=11327.06 00:16:37.266 clat (usec): min=190, max=587, avg=322.07, stdev=86.38 00:16:37.266 lat (usec): min=217, max=610, avg=343.34, stdev=89.82 00:16:37.266 clat percentiles (usec): 00:16:37.266 | 1.00th=[ 208], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 245], 00:16:37.266 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 318], 00:16:37.266 | 70.00th=[ 363], 80.00th=[ 416], 90.00th=[ 457], 95.00th=[ 486], 00:16:37.266 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 586], 99.95th=[ 586], 00:16:37.266 | 99.99th=[ 586] 00:16:37.266 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:16:37.266 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:37.266 lat (usec) : 250=20.63%, 500=72.90%, 750=2.97% 00:16:37.266 lat (msec) : 50=3.50% 00:16:37.266 cpu : usr=0.79%, sys=1.58%, ctx=572, majf=0, minf=1 00:16:37.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:37.266 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.266 issued rwts: total=60,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:37.266 job2: (groupid=0, jobs=1): err= 0: pid=823954: Mon Jul 15 09:24:48 2024 00:16:37.266 read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec) 00:16:37.266 slat (nsec): min=13241, max=33365, avg=23667.55, stdev=8292.91 00:16:37.266 clat (usec): min=338, max=42021, avg=39533.78, stdev=8772.27 00:16:37.266 lat (usec): min=365, max=42038, avg=39557.44, stdev=8771.71 00:16:37.266 clat percentiles (usec): 00:16:37.266 | 1.00th=[ 338], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:37.266 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:16:37.266 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:37.266 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:37.266 | 99.99th=[42206] 00:16:37.266 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:16:37.266 slat (nsec): min=7066, max=68872, avg=20993.46, stdev=11204.74 00:16:37.266 clat (usec): min=170, max=484, avg=301.06, stdev=75.98 00:16:37.266 lat (usec): min=182, max=522, avg=322.05, stdev=75.11 00:16:37.266 clat percentiles (usec): 00:16:37.266 | 1.00th=[ 180], 5.00th=[ 200], 10.00th=[ 215], 20.00th=[ 237], 00:16:37.266 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 281], 60.00th=[ 306], 00:16:37.266 | 70.00th=[ 343], 80.00th=[ 388], 90.00th=[ 416], 95.00th=[ 441], 00:16:37.266 | 99.00th=[ 465], 99.50th=[ 469], 99.90th=[ 486], 99.95th=[ 486], 00:16:37.266 | 99.99th=[ 486] 00:16:37.266 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:16:37.266 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:37.266 lat (usec) : 250=29.59%, 500=66.48% 00:16:37.266 lat (msec) : 50=3.93% 00:16:37.266 cpu : usr=0.48%, sys=1.06%, ctx=534, majf=0, minf=1 00:16:37.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:37.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.266 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:37.266 job3: (groupid=0, jobs=1): err= 0: pid=823955: Mon Jul 15 09:24:48 2024 00:16:37.266 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:16:37.266 slat (nsec): min=12082, max=34267, avg=23424.14, stdev=8756.85 00:16:37.266 clat (usec): min=40650, max=41074, avg=40952.55, stdev=82.61 00:16:37.266 lat (usec): min=40668, max=41091, avg=40975.97, stdev=80.50 00:16:37.266 clat percentiles (usec): 00:16:37.266 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:37.266 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:37.266 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:37.266 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:37.266 | 99.99th=[41157] 00:16:37.266 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:16:37.266 slat (nsec): min=7234, max=53645, avg=17463.97, stdev=7420.13 00:16:37.266 clat (usec): min=151, max=319, avg=220.75, stdev=25.19 00:16:37.266 lat (usec): min=159, max=329, avg=238.22, 
stdev=24.69 00:16:37.266 clat percentiles (usec): 00:16:37.266 | 1.00th=[ 163], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 198], 00:16:37.266 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 229], 00:16:37.266 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 260], 00:16:37.266 | 99.00th=[ 273], 99.50th=[ 273], 99.90th=[ 322], 99.95th=[ 322], 00:16:37.266 | 99.99th=[ 322] 00:16:37.266 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:16:37.266 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:37.266 lat (usec) : 250=80.90%, 500=14.98% 00:16:37.266 lat (msec) : 50=4.12% 00:16:37.266 cpu : usr=0.10%, sys=1.66%, ctx=534, majf=0, minf=2 00:16:37.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:37.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.266 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:37.266 00:16:37.266 Run status group 0 (all jobs): 00:16:37.266 READ: bw=2432KiB/s (2490kB/s), 84.8KiB/s-2079KiB/s (86.8kB/s-2129kB/s), io=2524KiB (2585kB), run=1013-1038msec 00:16:37.266 WRITE: bw=9865KiB/s (10.1MB/s), 1973KiB/s-4039KiB/s (2020kB/s-4136kB/s), io=10.0MiB (10.5MB), run=1013-1038msec 00:16:37.266 00:16:37.266 Disk stats (read/write): 00:16:37.266 nvme0n1: ios=565/1024, merge=0/0, ticks=863/242, in_queue=1105, util=94.29% 00:16:37.266 nvme0n2: ios=106/512, merge=0/0, ticks=770/145, in_queue=915, util=94.51% 00:16:37.266 nvme0n3: ios=67/512, merge=0/0, ticks=739/145, in_queue=884, util=90.60% 00:16:37.266 nvme0n4: ios=51/512, merge=0/0, ticks=725/110, in_queue=835, util=90.31% 00:16:37.266 09:24:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:37.266 [global] 00:16:37.266 thread=1 00:16:37.266 invalidate=1 00:16:37.266 rw=write 00:16:37.266 time_based=1 00:16:37.266 runtime=1 00:16:37.266 ioengine=libaio 00:16:37.266 direct=1 00:16:37.266 bs=4096 00:16:37.266 iodepth=128 00:16:37.266 norandommap=0 00:16:37.266 numjobs=1 00:16:37.266 00:16:37.266 verify_dump=1 00:16:37.266 verify_backlog=512 00:16:37.266 verify_state_save=0 00:16:37.266 do_verify=1 00:16:37.266 verify=crc32c-intel 00:16:37.266 [job0] 00:16:37.266 filename=/dev/nvme0n1 00:16:37.266 [job1] 00:16:37.266 filename=/dev/nvme0n2 00:16:37.266 [job2] 00:16:37.266 filename=/dev/nvme0n3 00:16:37.266 [job3] 00:16:37.266 filename=/dev/nvme0n4 00:16:37.266 Could not set queue depth (nvme0n1) 00:16:37.266 Could not set queue depth (nvme0n2) 00:16:37.266 Could not set queue depth (nvme0n3) 00:16:37.266 Could not set queue depth (nvme0n4) 00:16:37.266 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:37.266 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:37.266 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:37.266 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:37.266 fio-3.35 00:16:37.266 Starting 4 threads 00:16:38.644 00:16:38.644 job0: (groupid=0, jobs=1): err= 0: pid=824182: Mon Jul 15 09:24:49 2024 00:16:38.644 read: IOPS=4394, BW=17.2MiB/s 
(18.0MB/s)(18.0MiB/1047msec) 00:16:38.644 slat (usec): min=3, max=11147, avg=89.58, stdev=589.74 00:16:38.644 clat (usec): min=4149, max=53783, avg=12783.99, stdev=7553.01 00:16:38.644 lat (usec): min=4158, max=56903, avg=12873.58, stdev=7579.51 00:16:38.644 clat percentiles (usec): 00:16:38.644 | 1.00th=[ 7242], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9634], 00:16:38.644 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:16:38.644 | 70.00th=[11863], 80.00th=[12911], 90.00th=[15664], 95.00th=[19792], 00:16:38.644 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:16:38.644 | 99.99th=[53740] 00:16:38.644 write: IOPS=4401, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1047msec); 0 zone resets 00:16:38.644 slat (usec): min=5, max=54078, avg=113.38, stdev=1243.72 00:16:38.644 clat (usec): min=335, max=135457, avg=13397.29, stdev=11564.96 00:16:38.644 lat (usec): min=361, max=135471, avg=13510.67, stdev=11735.51 00:16:38.644 clat percentiles (msec): 00:16:38.644 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 10], 00:16:38.644 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:16:38.644 | 70.00th=[ 12], 80.00th=[ 16], 90.00th=[ 23], 95.00th=[ 35], 00:16:38.644 | 99.00th=[ 43], 99.50th=[ 97], 99.90th=[ 136], 99.95th=[ 136], 00:16:38.644 | 99.99th=[ 136] 00:16:38.644 bw ( KiB/s): min=17560, max=19304, per=31.14%, avg=18432.00, stdev=1233.19, samples=2 00:16:38.644 iops : min= 4390, max= 4826, avg=4608.00, stdev=308.30, samples=2 00:16:38.644 lat (usec) : 500=0.05%, 750=0.11% 00:16:38.644 lat (msec) : 2=0.16%, 4=0.50%, 10=36.43%, 20=53.82%, 50=7.84% 00:16:38.644 lat (msec) : 100=0.91%, 250=0.17% 00:16:38.644 cpu : usr=8.03%, sys=9.18%, ctx=400, majf=0, minf=13 00:16:38.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:38.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.644 issued rwts: total=4601,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.644 job1: (groupid=0, jobs=1): err= 0: pid=824183: Mon Jul 15 09:24:49 2024 00:16:38.644 read: IOPS=3697, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1005msec) 00:16:38.644 slat (usec): min=2, max=20469, avg=129.46, stdev=944.62 00:16:38.644 clat (usec): min=2913, max=74804, avg=16953.71, stdev=11847.29 00:16:38.644 lat (usec): min=7408, max=74814, avg=17083.17, stdev=11916.84 00:16:38.644 clat percentiles (usec): 00:16:38.644 | 1.00th=[ 7635], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[ 9896], 00:16:38.645 | 30.00th=[10421], 40.00th=[11076], 50.00th=[12911], 60.00th=[13829], 00:16:38.645 | 70.00th=[16909], 80.00th=[20841], 90.00th=[28967], 95.00th=[46924], 00:16:38.645 | 99.00th=[73925], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:16:38.645 | 99.99th=[74974] 00:16:38.645 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:16:38.645 slat (usec): min=3, max=18893, avg=116.30, stdev=807.09 00:16:38.645 clat (usec): min=1022, max=72726, avg=15710.25, stdev=8302.18 00:16:38.645 lat (usec): min=1071, max=72733, avg=15826.55, stdev=8314.84 00:16:38.645 clat percentiles (usec): 00:16:38.645 | 1.00th=[ 6194], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[10159], 00:16:38.645 | 30.00th=[10421], 40.00th=[11863], 50.00th=[13435], 60.00th=[16712], 00:16:38.645 | 70.00th=[17695], 80.00th=[20055], 90.00th=[24511], 95.00th=[28967], 00:16:38.645 | 99.00th=[57934], 99.50th=[63701], 
99.90th=[72877], 99.95th=[72877], 00:16:38.645 | 99.99th=[72877] 00:16:38.645 bw ( KiB/s): min=15728, max=17040, per=27.68%, avg=16384.00, stdev=927.72, samples=2 00:16:38.645 iops : min= 3932, max= 4260, avg=4096.00, stdev=231.93, samples=2 00:16:38.645 lat (msec) : 2=0.01%, 4=0.12%, 10=19.64%, 20=58.73%, 50=19.09% 00:16:38.645 lat (msec) : 100=2.42% 00:16:38.645 cpu : usr=4.28%, sys=5.38%, ctx=221, majf=0, minf=17 00:16:38.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:38.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.645 issued rwts: total=3716,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.645 job2: (groupid=0, jobs=1): err= 0: pid=824184: Mon Jul 15 09:24:49 2024 00:16:38.645 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:16:38.645 slat (usec): min=2, max=31834, avg=146.39, stdev=1134.30 00:16:38.645 clat (usec): min=7977, max=68646, avg=18717.94, stdev=7693.86 00:16:38.645 lat (usec): min=8002, max=68681, avg=18864.33, stdev=7808.52 00:16:38.645 clat percentiles (usec): 00:16:38.645 | 1.00th=[ 9241], 5.00th=[11863], 10.00th=[11994], 20.00th=[12911], 00:16:38.645 | 30.00th=[14091], 40.00th=[15533], 50.00th=[16909], 60.00th=[17695], 00:16:38.645 | 70.00th=[19006], 80.00th=[20579], 90.00th=[31065], 95.00th=[36963], 00:16:38.645 | 99.00th=[43779], 99.50th=[43779], 99.90th=[50070], 99.95th=[62129], 00:16:38.645 | 99.99th=[68682] 00:16:38.645 write: IOPS=3187, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1006msec); 0 zone resets 00:16:38.645 slat (usec): min=3, max=30075, avg=159.69, stdev=1109.47 00:16:38.645 clat (usec): min=959, max=54917, avg=21801.89, stdev=9579.53 00:16:38.645 lat (usec): min=995, max=54941, avg=21961.58, stdev=9639.88 00:16:38.645 clat percentiles (usec): 00:16:38.645 | 1.00th=[ 8225], 5.00th=[11600], 10.00th=[12125], 20.00th=[12649], 00:16:38.645 | 30.00th=[13042], 40.00th=[15270], 50.00th=[20317], 60.00th=[23725], 00:16:38.645 | 70.00th=[26346], 80.00th=[31327], 90.00th=[37487], 95.00th=[40109], 00:16:38.645 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[46924], 00:16:38.645 | 99.99th=[54789] 00:16:38.645 bw ( KiB/s): min=10760, max=13880, per=20.81%, avg=12320.00, stdev=2206.17, samples=2 00:16:38.645 iops : min= 2690, max= 3470, avg=3080.00, stdev=551.54, samples=2 00:16:38.645 lat (usec) : 1000=0.02% 00:16:38.645 lat (msec) : 2=0.02%, 10=1.62%, 20=61.01%, 50=37.24%, 100=0.10% 00:16:38.645 cpu : usr=5.67%, sys=6.87%, ctx=310, majf=0, minf=13 00:16:38.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:38.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.645 issued rwts: total=3072,3207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.645 job3: (groupid=0, jobs=1): err= 0: pid=824185: Mon Jul 15 09:24:49 2024 00:16:38.645 read: IOPS=3127, BW=12.2MiB/s (12.8MB/s)(12.8MiB/1047msec) 00:16:38.645 slat (usec): min=2, max=47704, avg=171.09, stdev=1429.64 00:16:38.645 clat (msec): min=5, max=117, avg=23.11, stdev=20.42 00:16:38.645 lat (msec): min=5, max=117, avg=23.28, stdev=20.52 00:16:38.645 clat percentiles (msec): 00:16:38.645 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 13], 00:16:38.645 | 
30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 18], 00:16:38.645 | 70.00th=[ 22], 80.00th=[ 25], 90.00th=[ 57], 95.00th=[ 79], 00:16:38.645 | 99.00th=[ 97], 99.50th=[ 117], 99.90th=[ 117], 99.95th=[ 117], 00:16:38.645 | 99.99th=[ 117] 00:16:38.645 write: IOPS=3423, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1047msec); 0 zone resets 00:16:38.645 slat (usec): min=3, max=18278, avg=113.26, stdev=762.45 00:16:38.645 clat (usec): min=2180, max=62499, avg=15776.62, stdev=7808.43 00:16:38.645 lat (usec): min=2192, max=62515, avg=15889.88, stdev=7837.50 00:16:38.645 clat percentiles (usec): 00:16:38.645 | 1.00th=[ 5014], 5.00th=[ 7701], 10.00th=[10552], 20.00th=[12125], 00:16:38.645 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13566], 60.00th=[14222], 00:16:38.645 | 70.00th=[15664], 80.00th=[19006], 90.00th=[21103], 95.00th=[34341], 00:16:38.645 | 99.00th=[53216], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:16:38.645 | 99.99th=[62653] 00:16:38.645 bw ( KiB/s): min=12288, max=16384, per=24.22%, avg=14336.00, stdev=2896.31, samples=2 00:16:38.645 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:16:38.645 lat (msec) : 4=0.09%, 10=6.87%, 20=69.34%, 50=18.15%, 100=5.12% 00:16:38.645 lat (msec) : 250=0.44% 00:16:38.645 cpu : usr=2.68%, sys=5.35%, ctx=281, majf=0, minf=7 00:16:38.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:38.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.645 issued rwts: total=3275,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.645 00:16:38.645 Run status group 0 (all jobs): 00:16:38.645 READ: bw=54.7MiB/s (57.4MB/s), 11.9MiB/s-17.2MiB/s (12.5MB/s-18.0MB/s), io=57.3MiB (60.1MB), run=1005-1047msec 00:16:38.645 WRITE: bw=57.8MiB/s (60.6MB/s), 12.5MiB/s-17.2MiB/s (13.1MB/s-18.0MB/s), io=60.5MiB (63.5MB), run=1005-1047msec 00:16:38.645 00:16:38.645 Disk stats (read/write): 00:16:38.645 nvme0n1: ios=3613/4015, merge=0/0, ticks=38092/46890, in_queue=84982, util=98.20% 00:16:38.645 nvme0n2: ios=3267/3584, merge=0/0, ticks=27423/26381, in_queue=53804, util=98.07% 00:16:38.645 nvme0n3: ios=2524/2560, merge=0/0, ticks=31410/30344, in_queue=61754, util=97.81% 00:16:38.645 nvme0n4: ios=3126/3370, merge=0/0, ticks=31279/27489, in_queue=58768, util=97.80% 00:16:38.645 09:24:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:38.645 [global] 00:16:38.645 thread=1 00:16:38.645 invalidate=1 00:16:38.645 rw=randwrite 00:16:38.645 time_based=1 00:16:38.645 runtime=1 00:16:38.645 ioengine=libaio 00:16:38.645 direct=1 00:16:38.645 bs=4096 00:16:38.645 iodepth=128 00:16:38.645 norandommap=0 00:16:38.645 numjobs=1 00:16:38.645 00:16:38.645 verify_dump=1 00:16:38.645 verify_backlog=512 00:16:38.645 verify_state_save=0 00:16:38.645 do_verify=1 00:16:38.645 verify=crc32c-intel 00:16:38.645 [job0] 00:16:38.645 filename=/dev/nvme0n1 00:16:38.645 [job1] 00:16:38.645 filename=/dev/nvme0n2 00:16:38.645 [job2] 00:16:38.645 filename=/dev/nvme0n3 00:16:38.645 [job3] 00:16:38.645 filename=/dev/nvme0n4 00:16:38.645 Could not set queue depth (nvme0n1) 00:16:38.645 Could not set queue depth (nvme0n2) 00:16:38.645 Could not set queue depth (nvme0n3) 00:16:38.645 Could not set queue depth (nvme0n4) 00:16:38.645 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:38.645 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:38.645 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:38.645 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:38.645 fio-3.35 00:16:38.645 Starting 4 threads 00:16:40.023 00:16:40.023 job0: (groupid=0, jobs=1): err= 0: pid=824415: Mon Jul 15 09:24:50 2024 00:16:40.023 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:16:40.023 slat (usec): min=2, max=31103, avg=202.17, stdev=1595.69 00:16:40.023 clat (msec): min=5, max=117, avg=24.96, stdev=22.33 00:16:40.023 lat (msec): min=5, max=117, avg=25.16, stdev=22.51 00:16:40.023 clat percentiles (msec): 00:16:40.023 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:16:40.023 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 15], 60.00th=[ 20], 00:16:40.023 | 70.00th=[ 28], 80.00th=[ 36], 90.00th=[ 57], 95.00th=[ 80], 00:16:40.023 | 99.00th=[ 101], 99.50th=[ 101], 99.90th=[ 106], 99.95th=[ 107], 00:16:40.023 | 99.99th=[ 118] 00:16:40.023 write: IOPS=2713, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1006msec); 0 zone resets 00:16:40.023 slat (usec): min=3, max=29100, avg=169.93, stdev=1317.47 00:16:40.023 clat (usec): min=4367, max=98516, avg=22607.96, stdev=17623.87 00:16:40.023 lat (usec): min=4381, max=98547, avg=22777.89, stdev=17772.59 00:16:40.023 clat percentiles (usec): 00:16:40.023 | 1.00th=[ 5669], 5.00th=[ 7439], 10.00th=[ 9765], 20.00th=[11207], 00:16:40.023 | 30.00th=[12649], 40.00th=[13435], 50.00th=[16188], 60.00th=[17695], 00:16:40.023 | 70.00th=[22414], 80.00th=[26870], 90.00th=[52167], 95.00th=[66847], 00:16:40.023 | 99.00th=[73925], 99.50th=[83362], 99.90th=[86508], 99.95th=[90702], 00:16:40.023 | 99.99th=[98042] 00:16:40.023 bw ( KiB/s): min= 5704, max=15120, per=17.48%, avg=10412.00, stdev=6658.12, samples=2 00:16:40.023 iops : min= 1426, max= 3780, avg=2603.00, stdev=1664.53, samples=2 00:16:40.023 lat (msec) : 10=13.04%, 20=49.89%, 50=24.73%, 100=11.59%, 250=0.76% 00:16:40.023 cpu : usr=2.09%, sys=3.88%, ctx=165, majf=0, minf=1 00:16:40.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:40.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:40.023 issued rwts: total=2560,2730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:40.023 job1: (groupid=0, jobs=1): err= 0: pid=824416: Mon Jul 15 09:24:50 2024 00:16:40.023 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:16:40.023 slat (usec): min=2, max=16248, avg=133.16, stdev=937.98 00:16:40.023 clat (usec): min=6206, max=58070, avg=17721.40, stdev=9860.12 00:16:40.023 lat (usec): min=6212, max=58110, avg=17854.56, stdev=9920.79 00:16:40.023 clat percentiles (usec): 00:16:40.023 | 1.00th=[ 6521], 5.00th=[ 8586], 10.00th=[10159], 20.00th=[11076], 00:16:40.023 | 30.00th=[11469], 40.00th=[11863], 50.00th=[14877], 60.00th=[15795], 00:16:40.023 | 70.00th=[18482], 80.00th=[22938], 90.00th=[31851], 95.00th=[41157], 00:16:40.023 | 99.00th=[53740], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:16:40.023 | 99.99th=[57934] 00:16:40.023 write: IOPS=3828, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1002msec); 0 zone resets 00:16:40.023 
slat (usec): min=3, max=16252, avg=129.17, stdev=966.18 00:16:40.023 clat (usec): min=263, max=53619, avg=16413.21, stdev=8786.67 00:16:40.023 lat (usec): min=2745, max=53631, avg=16542.38, stdev=8876.74 00:16:40.023 clat percentiles (usec): 00:16:40.023 | 1.00th=[ 3523], 5.00th=[ 7701], 10.00th=[ 9372], 20.00th=[10421], 00:16:40.023 | 30.00th=[11207], 40.00th=[11731], 50.00th=[11994], 60.00th=[12911], 00:16:40.023 | 70.00th=[18482], 80.00th=[24511], 90.00th=[31327], 95.00th=[34341], 00:16:40.023 | 99.00th=[40633], 99.50th=[40633], 99.90th=[47449], 99.95th=[52691], 00:16:40.023 | 99.99th=[53740] 00:16:40.023 bw ( KiB/s): min=17040, max=17040, per=28.61%, avg=17040.00, stdev= 0.00, samples=1 00:16:40.023 iops : min= 4260, max= 4260, avg=4260.00, stdev= 0.00, samples=1 00:16:40.023 lat (usec) : 500=0.01% 00:16:40.023 lat (msec) : 4=0.53%, 10=12.33%, 20=59.74%, 50=26.39%, 100=1.00% 00:16:40.023 cpu : usr=2.90%, sys=5.69%, ctx=284, majf=0, minf=1 00:16:40.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:40.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:40.023 issued rwts: total=3584,3836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:40.023 job2: (groupid=0, jobs=1): err= 0: pid=824417: Mon Jul 15 09:24:50 2024 00:16:40.023 read: IOPS=4876, BW=19.0MiB/s (20.0MB/s)(19.1MiB/1002msec) 00:16:40.023 slat (usec): min=2, max=16313, avg=97.50, stdev=633.36 00:16:40.024 clat (usec): min=874, max=40071, avg=12707.10, stdev=4044.50 00:16:40.024 lat (usec): min=921, max=40121, avg=12804.60, stdev=4090.38 00:16:40.024 clat percentiles (usec): 00:16:40.024 | 1.00th=[ 7439], 5.00th=[ 8291], 10.00th=[10028], 20.00th=[10290], 00:16:40.024 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11207], 60.00th=[11994], 00:16:40.024 | 70.00th=[13042], 80.00th=[14615], 90.00th=[17171], 95.00th=[22676], 00:16:40.024 | 99.00th=[25822], 99.50th=[28967], 99.90th=[32900], 99.95th=[32900], 00:16:40.024 | 99.99th=[40109] 00:16:40.024 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:16:40.024 slat (usec): min=4, max=14569, avg=89.65, stdev=541.73 00:16:40.024 clat (usec): min=3230, max=36635, avg=12686.29, stdev=4884.73 00:16:40.024 lat (usec): min=3238, max=36644, avg=12775.94, stdev=4930.23 00:16:40.024 clat percentiles (usec): 00:16:40.024 | 1.00th=[ 5080], 5.00th=[ 7767], 10.00th=[ 9634], 20.00th=[10028], 00:16:40.024 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11994], 00:16:40.024 | 70.00th=[13829], 80.00th=[14484], 90.00th=[16057], 95.00th=[23462], 00:16:40.024 | 99.00th=[32113], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439], 00:16:40.024 | 99.99th=[36439] 00:16:40.024 bw ( KiB/s): min=16480, max=16480, per=27.67%, avg=16480.00, stdev= 0.00, samples=1 00:16:40.024 iops : min= 4120, max= 4120, avg=4120.00, stdev= 0.00, samples=1 00:16:40.024 lat (usec) : 1000=0.02% 00:16:40.024 lat (msec) : 4=0.21%, 10=14.30%, 20=77.28%, 50=8.19% 00:16:40.024 cpu : usr=8.39%, sys=13.19%, ctx=448, majf=0, minf=1 00:16:40.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:40.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:40.024 issued rwts: total=4886,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.024 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:16:40.024 job3: (groupid=0, jobs=1): err= 0: pid=824418: Mon Jul 15 09:24:50 2024 00:16:40.024 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:16:40.024 slat (usec): min=2, max=18678, avg=134.33, stdev=980.71 00:16:40.024 clat (usec): min=4551, max=55432, avg=17745.11, stdev=7064.85 00:16:40.024 lat (usec): min=4560, max=55455, avg=17879.45, stdev=7148.96 00:16:40.024 clat percentiles (usec): 00:16:40.024 | 1.00th=[ 8455], 5.00th=[10945], 10.00th=[11731], 20.00th=[13960], 00:16:40.024 | 30.00th=[14746], 40.00th=[15795], 50.00th=[16319], 60.00th=[16909], 00:16:40.024 | 70.00th=[17695], 80.00th=[19268], 90.00th=[22938], 95.00th=[35914], 00:16:40.024 | 99.00th=[47449], 99.50th=[49021], 99.90th=[55313], 99.95th=[55313], 00:16:40.024 | 99.99th=[55313] 00:16:40.024 write: IOPS=3341, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1012msec); 0 zone resets 00:16:40.024 slat (usec): min=3, max=13437, avg=156.51, stdev=825.53 00:16:40.024 clat (usec): min=3575, max=57389, avg=21640.12, stdev=12690.18 00:16:40.024 lat (usec): min=3583, max=57398, avg=21796.63, stdev=12779.30 00:16:40.024 clat percentiles (usec): 00:16:40.024 | 1.00th=[ 7701], 5.00th=[10814], 10.00th=[12256], 20.00th=[13173], 00:16:40.024 | 30.00th=[13960], 40.00th=[14484], 50.00th=[15795], 60.00th=[16909], 00:16:40.024 | 70.00th=[21627], 80.00th=[29230], 90.00th=[46924], 95.00th=[51119], 00:16:40.024 | 99.00th=[55837], 99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:16:40.024 | 99.99th=[57410] 00:16:40.024 bw ( KiB/s): min= 9952, max=16088, per=21.86%, avg=13020.00, stdev=4338.81, samples=2 00:16:40.024 iops : min= 2488, max= 4022, avg=3255.00, stdev=1084.70, samples=2 00:16:40.024 lat (msec) : 4=0.09%, 10=3.05%, 20=70.45%, 50=23.18%, 100=3.22% 00:16:40.024 cpu : usr=2.08%, sys=4.65%, ctx=331, majf=0, minf=1 00:16:40.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:16:40.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:40.024 issued rwts: total=3072,3382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.024 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:40.024 00:16:40.024 Run status group 0 (all jobs): 00:16:40.024 READ: bw=54.4MiB/s (57.1MB/s), 9.94MiB/s-19.0MiB/s (10.4MB/s-20.0MB/s), io=55.1MiB (57.8MB), run=1002-1012msec 00:16:40.024 WRITE: bw=58.2MiB/s (61.0MB/s), 10.6MiB/s-20.0MiB/s (11.1MB/s-20.9MB/s), io=58.9MiB (61.7MB), run=1002-1012msec 00:16:40.024 00:16:40.024 Disk stats (read/write): 00:16:40.024 nvme0n1: ios=2341/2560, merge=0/0, ticks=17451/20794, in_queue=38245, util=89.88% 00:16:40.024 nvme0n2: ios=3124/3503, merge=0/0, ticks=19931/25240, in_queue=45171, util=93.71% 00:16:40.024 nvme0n3: ios=4148/4176, merge=0/0, ticks=33806/37847, in_queue=71653, util=100.00% 00:16:40.024 nvme0n4: ios=2603/2567, merge=0/0, ticks=39618/50180, in_queue=89798, util=97.79% 00:16:40.024 09:24:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:40.024 09:24:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=824554 00:16:40.024 09:24:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:40.024 09:24:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:40.024 [global] 00:16:40.024 thread=1 00:16:40.024 invalidate=1 00:16:40.024 rw=read 00:16:40.024 time_based=1 00:16:40.024 runtime=10 
00:16:40.024 ioengine=libaio 00:16:40.024 direct=1 00:16:40.024 bs=4096 00:16:40.024 iodepth=1 00:16:40.024 norandommap=1 00:16:40.024 numjobs=1 00:16:40.024 00:16:40.024 [job0] 00:16:40.024 filename=/dev/nvme0n1 00:16:40.024 [job1] 00:16:40.024 filename=/dev/nvme0n2 00:16:40.024 [job2] 00:16:40.024 filename=/dev/nvme0n3 00:16:40.024 [job3] 00:16:40.024 filename=/dev/nvme0n4 00:16:40.024 Could not set queue depth (nvme0n1) 00:16:40.024 Could not set queue depth (nvme0n2) 00:16:40.024 Could not set queue depth (nvme0n3) 00:16:40.024 Could not set queue depth (nvme0n4) 00:16:40.024 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:40.024 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:40.024 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:40.024 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:40.024 fio-3.35 00:16:40.024 Starting 4 threads 00:16:43.314 09:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:43.314 09:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:43.314 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=311296, buflen=4096 00:16:43.314 fio: pid=824651, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:43.314 09:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:43.314 09:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:43.572 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=7356416, buflen=4096 00:16:43.572 fio: pid=824650, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:43.572 09:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:43.572 09:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:43.831 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=339968, buflen=4096 00:16:43.831 fio: pid=824646, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:43.831 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=38887424, buflen=4096 00:16:43.831 fio: pid=824647, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:16:43.831 09:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:43.831 09:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:44.090 00:16:44.090 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=824646: Mon Jul 15 09:24:55 2024 00:16:44.090 read: IOPS=24, BW=97.4KiB/s (99.7kB/s)(332KiB/3410msec) 00:16:44.090 slat (usec): min=11, max=8835, avg=130.61, stdev=961.27 00:16:44.090 clat (usec): min=485, max=55180, avg=40669.57, stdev=4730.39 00:16:44.090 lat (usec): min=510, max=64015, 
avg=40801.57, stdev=5140.97 00:16:44.090 clat percentiles (usec): 00:16:44.090 | 1.00th=[ 486], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:44.090 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:44.090 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:44.090 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:16:44.090 | 99.99th=[55313] 00:16:44.090 bw ( KiB/s): min= 96, max= 104, per=0.78%, avg=98.67, stdev= 4.13, samples=6 00:16:44.090 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:16:44.090 lat (usec) : 500=1.19% 00:16:44.090 lat (msec) : 50=96.43%, 100=1.19% 00:16:44.090 cpu : usr=0.15%, sys=0.00%, ctx=87, majf=0, minf=1 00:16:44.090 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:44.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.090 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.090 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.090 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:44.090 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=824647: Mon Jul 15 09:24:55 2024 00:16:44.090 read: IOPS=2606, BW=10.2MiB/s (10.7MB/s)(37.1MiB/3643msec) 00:16:44.090 slat (usec): min=4, max=8926, avg=12.08, stdev=118.08 00:16:44.090 clat (usec): min=159, max=42045, avg=369.44, stdev=2699.32 00:16:44.090 lat (usec): min=165, max=50955, avg=380.76, stdev=2717.38 00:16:44.090 clat percentiles (usec): 00:16:44.090 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 182], 00:16:44.090 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:16:44.090 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 229], 00:16:44.090 | 99.00th=[ 289], 99.50th=[ 322], 99.90th=[42206], 99.95th=[42206], 00:16:44.090 | 99.99th=[42206] 00:16:44.090 bw ( KiB/s): min= 94, max=19904, per=86.28%, avg=10846.57, stdev=8369.73, samples=7 00:16:44.090 iops : min= 23, max= 4976, avg=2711.57, stdev=2092.54, samples=7 00:16:44.090 lat (usec) : 250=97.51%, 500=2.04%, 750=0.01% 00:16:44.090 lat (msec) : 50=0.42% 00:16:44.090 cpu : usr=0.99%, sys=3.38%, ctx=9497, majf=0, minf=1 00:16:44.090 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:44.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.090 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.090 issued rwts: total=9495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.090 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:44.090 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=824650: Mon Jul 15 09:24:55 2024 00:16:44.090 read: IOPS=568, BW=2273KiB/s (2328kB/s)(7184KiB/3160msec) 00:16:44.090 slat (nsec): min=5336, max=39810, avg=8179.08, stdev=5099.88 00:16:44.090 clat (usec): min=179, max=42044, avg=1736.39, stdev=7725.66 00:16:44.090 lat (usec): min=187, max=42060, avg=1744.56, stdev=7729.17 00:16:44.090 clat percentiles (usec): 00:16:44.090 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 202], 00:16:44.090 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 215], 00:16:44.090 | 70.00th=[ 219], 80.00th=[ 231], 90.00th=[ 297], 95.00th=[ 363], 00:16:44.090 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:44.090 | 99.99th=[42206] 00:16:44.090 bw ( KiB/s): min= 96, 
max=13816, per=19.00%, avg=2389.33, stdev=5597.91, samples=6 00:16:44.090 iops : min= 24, max= 3454, avg=597.33, stdev=1399.48, samples=6 00:16:44.090 lat (usec) : 250=82.03%, 500=14.19% 00:16:44.090 lat (msec) : 20=0.06%, 50=3.67% 00:16:44.090 cpu : usr=0.13%, sys=0.57%, ctx=1797, majf=0, minf=1 00:16:44.090 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:44.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.090 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.090 issued rwts: total=1797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.090 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:44.090 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=824651: Mon Jul 15 09:24:55 2024 00:16:44.090 read: IOPS=26, BW=105KiB/s (108kB/s)(304KiB/2895msec) 00:16:44.090 slat (nsec): min=7663, max=38801, avg=25277.04, stdev=10326.64 00:16:44.090 clat (usec): min=275, max=41247, avg=37762.15, stdev=11034.66 00:16:44.090 lat (usec): min=284, max=41254, avg=37787.58, stdev=11037.17 00:16:44.090 clat percentiles (usec): 00:16:44.090 | 1.00th=[ 277], 5.00th=[ 306], 10.00th=[41157], 20.00th=[41157], 00:16:44.090 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:44.090 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:44.090 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:44.090 | 99.99th=[41157] 00:16:44.090 bw ( KiB/s): min= 96, max= 144, per=0.84%, avg=105.60, stdev=21.47, samples=5 00:16:44.090 iops : min= 24, max= 36, avg=26.40, stdev= 5.37, samples=5 00:16:44.090 lat (usec) : 500=7.79% 00:16:44.090 lat (msec) : 50=90.91% 00:16:44.090 cpu : usr=0.14%, sys=0.00%, ctx=77, majf=0, minf=1 00:16:44.090 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:44.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.090 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.090 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.090 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:44.090 00:16:44.091 Run status group 0 (all jobs): 00:16:44.091 READ: bw=12.3MiB/s (12.9MB/s), 97.4KiB/s-10.2MiB/s (99.7kB/s-10.7MB/s), io=44.7MiB (46.9MB), run=2895-3643msec 00:16:44.091 00:16:44.091 Disk stats (read/write): 00:16:44.091 nvme0n1: ios=127/0, merge=0/0, ticks=4139/0, in_queue=4139, util=99.89% 00:16:44.091 nvme0n2: ios=9493/0, merge=0/0, ticks=3390/0, in_queue=3390, util=96.22% 00:16:44.091 nvme0n3: ios=1795/0, merge=0/0, ticks=3074/0, in_queue=3074, util=96.75% 00:16:44.091 nvme0n4: ios=75/0, merge=0/0, ticks=2832/0, in_queue=2832, util=96.77% 00:16:44.091 09:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:44.091 09:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:44.349 09:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:44.349 09:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:44.607 09:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs
00:16:44.607 09:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:16:44.865 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:16:44.865 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:16:45.123 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:16:45.123 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 824554
00:16:45.123 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:16:45.123 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:45.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:45.382 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:45.382 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0
00:16:45.382 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:45.382 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:45.382 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:45.382 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:45.382 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0
00:16:45.382 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:16:45.382 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:16:45.382 nvmf hotplug test: fio failed as expected
00:16:45.382 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:45.641 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync
00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e
00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:45.642 rmmod nvme_tcp
00:16:45.642 rmmod nvme_fabrics
00:16:45.642 rmmod nvme_keyring
00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e
00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0
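
The hotplug pass that just completed follows a small, reusable pattern: start fio reads against the exported namespaces in the background, delete the backing raid/malloc bdevs over RPC while I/O is in flight, then require fio to exit with errors. A minimal standalone sketch of what target/fio.sh does in the trace above, using only paths, bdev names and flags that appear in this run:

  # Hotplug sketch: reads must start failing once the backing bdevs are gone.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10 s of reads at iodepth 1
  fio_pid=$!
  sleep 3                                                          # let I/O get going first
  $SPDK/scripts/rpc.py bdev_raid_delete concat0                    # same deletions as traced above
  $SPDK/scripts/rpc.py bdev_raid_delete raid0
  for malloc_bdev in Malloc{0..6}; do
      $SPDK/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
  done
  wait "$fio_pid" && echo 'unexpected fio success' \
      || echo 'nvmf hotplug test: fio failed as expected'

The per-job err=121 (Remote I/O error) and err= 5 (Input/output error) results earlier are exactly this expected failure surfacing through the kernel initiator's /dev/nvme0n* devices.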
09:24:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 822647 ']' 00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 822647 00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 822647 ']' 00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 822647 00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 822647 00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 822647' 00:16:45.642 killing process with pid 822647 00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 822647 00:16:45.642 09:24:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 822647 00:16:45.899 09:24:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:45.899 09:24:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:45.899 09:24:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:45.899 09:24:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.899 09:24:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:45.899 09:24:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.899 09:24:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.899 09:24:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.437 09:24:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:48.437 00:16:48.437 real 0m23.425s 00:16:48.437 user 1m21.321s 00:16:48.437 sys 0m6.504s 00:16:48.437 09:24:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:48.437 09:24:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.437 ************************************ 00:16:48.437 END TEST nvmf_fio_target 00:16:48.437 ************************************ 00:16:48.437 09:24:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:48.437 09:24:59 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:48.437 09:24:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:48.437 09:24:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:48.437 09:24:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:48.437 ************************************ 00:16:48.437 START TEST nvmf_bdevio 00:16:48.437 ************************************ 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:48.437 * Looking for test storage... 
00:16:48.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:48.437 09:24:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.438 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:48.438 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:48.438 09:24:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:48.438 09:24:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:50.336 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:50.336 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.336 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:50.337 Found net devices under 0000:09:00.0: cvl_0_0 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:50.337 
Found net devices under 0000:09:00.1: cvl_0_1 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:50.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:16:50.337 00:16:50.337 --- 10.0.0.2 ping statistics --- 00:16:50.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.337 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:50.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:50.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:16:50.337 00:16:50.337 --- 10.0.0.1 ping statistics --- 00:16:50.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.337 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=827266 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 827266 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 827266 ']' 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.337 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:50.337 [2024-07-15 09:25:01.407133] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:16:50.337 [2024-07-15 09:25:01.407205] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.337 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.337 [2024-07-15 09:25:01.470973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:50.595 [2024-07-15 09:25:01.582846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.595 [2024-07-15 09:25:01.582895] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
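
Everything the bdevio run needs from the network side was prepared by nvmf_tcp_init in the trace above: one port of the e810 pair is moved into a private network namespace for the target, the other stays in the default namespace for the initiator. Condensed into plain commands, all taken from this trace (addr-flush steps omitted), the topology is:

  # Target port lives in its own namespace; initiator stays on the peer port.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # verify both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # nvmfappstart then launches the target inside that namespace:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78

The -m 0x78 core mask selects cores 3 through 6, which is why the reactor threads report in on exactly those cores in the startup notices that follow.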
00:16:50.595 [2024-07-15 09:25:01.582918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:50.595 [2024-07-15 09:25:01.582929] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:50.595 [2024-07-15 09:25:01.582939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:50.595 [2024-07-15 09:25:01.583041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:16:50.595 [2024-07-15 09:25:01.583112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:16:50.595 [2024-07-15 09:25:01.583232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:16:50.595 [2024-07-15 09:25:01.583234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:16:50.595 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:50.595 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0
00:16:50.595 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:16:50.595 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:16:50.596 [2024-07-15 09:25:01.742684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:16:50.596 Malloc0
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:50.596 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
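
The rpc_cmd calls just traced (rpc_cmd forwards to scripts/rpc.py against the default /var/tmp/spdk.sock socket shown in the waitforlisten trace) provision the whole target in five steps. As a standalone sketch, the same setup would look roughly like:

  # NVMe-oF/TCP target provisioning, mirroring bdevio.sh@18-22 above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192              # transport opts exactly as used by this suite
  $RPC bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB RAM disk, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the RAM disk
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The "TCP Transport Init" and "Malloc0" lines above, and the "Target Listening on 10.0.0.2 port 4420" notice that follows, are the target acknowledging each of these calls.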
00:16:50.854 [2024-07-15 09:25:01.795749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=()
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:16:50.854 {
00:16:50.854 "params": {
00:16:50.854 "name": "Nvme$subsystem",
00:16:50.854 "trtype": "$TEST_TRANSPORT",
00:16:50.854 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:50.854 "adrfam": "ipv4",
00:16:50.854 "trsvcid": "$NVMF_PORT",
00:16:50.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:50.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:50.854 "hdgst": ${hdgst:-false},
00:16:50.854 "ddgst": ${ddgst:-false}
00:16:50.854 },
00:16:50.854 "method": "bdev_nvme_attach_controller"
00:16:50.854 }
00:16:50.854 EOF
00:16:50.854 )")
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq .
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=,
00:16:50.854 09:25:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:16:50.854 "params": {
00:16:50.854 "name": "Nvme1",
00:16:50.854 "trtype": "tcp",
00:16:50.854 "traddr": "10.0.0.2",
00:16:50.854 "adrfam": "ipv4",
00:16:50.854 "trsvcid": "4420",
00:16:50.854 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:50.854 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:50.854 "hdgst": false,
00:16:50.854 "ddgst": false
00:16:50.854 },
00:16:50.854 "method": "bdev_nvme_attach_controller"
00:16:50.854 }'
00:16:50.854 [2024-07-15 09:25:01.840245] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
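
gen_nvmf_target_json expands the heredoc template above into the resolved bdev_nvme_attach_controller entry shown in the printf, and jq assembles the final document that bdevio reads from /dev/fd/62. The full envelope is not echoed in this trace; assuming the standard wrapper that nvmf/common.sh builds around such entries, the config handed to bdevio would look like:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }

That single attach is why bdevio reports exactly one I/O target, Nvme1n1 (131072 blocks of 512 bytes, 64 MiB), a few lines further down.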
00:16:50.854 [2024-07-15 09:25:01.840317] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid827415 ] 00:16:50.854 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.854 [2024-07-15 09:25:01.902092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:50.854 [2024-07-15 09:25:02.017747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.854 [2024-07-15 09:25:02.017796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.854 [2024-07-15 09:25:02.017806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.113 I/O targets: 00:16:51.113 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:51.113 00:16:51.113 00:16:51.113 CUnit - A unit testing framework for C - Version 2.1-3 00:16:51.113 http://cunit.sourceforge.net/ 00:16:51.113 00:16:51.113 00:16:51.113 Suite: bdevio tests on: Nvme1n1 00:16:51.113 Test: blockdev write read block ...passed 00:16:51.113 Test: blockdev write zeroes read block ...passed 00:16:51.113 Test: blockdev write zeroes read no split ...passed 00:16:51.113 Test: blockdev write zeroes read split ...passed 00:16:51.370 Test: blockdev write zeroes read split partial ...passed 00:16:51.370 Test: blockdev reset ...[2024-07-15 09:25:02.312447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:51.370 [2024-07-15 09:25:02.312552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152d580 (9): Bad file descriptor 00:16:51.370 [2024-07-15 09:25:02.407567] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:51.370 passed 00:16:51.370 Test: blockdev write read 8 blocks ...passed 00:16:51.370 Test: blockdev write read size > 128k ...passed 00:16:51.370 Test: blockdev write read invalid size ...passed 00:16:51.370 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:51.370 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:51.370 Test: blockdev write read max offset ...passed 00:16:51.370 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:51.370 Test: blockdev writev readv 8 blocks ...passed 00:16:51.370 Test: blockdev writev readv 30 x 1block ...passed 00:16:51.629 Test: blockdev writev readv block ...passed 00:16:51.629 Test: blockdev writev readv size > 128k ...passed 00:16:51.629 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:51.629 Test: blockdev comparev and writev ...[2024-07-15 09:25:02.577859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:51.629 [2024-07-15 09:25:02.577895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.629 [2024-07-15 09:25:02.577919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:51.629 [2024-07-15 09:25:02.577937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:51.629 [2024-07-15 09:25:02.578270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:51.629 [2024-07-15 09:25:02.578296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:51.629 [2024-07-15 09:25:02.578319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:51.629 [2024-07-15 09:25:02.578337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:51.629 [2024-07-15 09:25:02.578654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:51.629 [2024-07-15 09:25:02.578682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:51.630 [2024-07-15 09:25:02.578706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:51.630 [2024-07-15 09:25:02.578723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:51.630 [2024-07-15 09:25:02.579063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:51.630 [2024-07-15 09:25:02.579087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:51.630 [2024-07-15 09:25:02.579109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:51.630 [2024-07-15 09:25:02.579125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:51.630 passed 00:16:51.630 Test: blockdev nvme passthru rw ...passed 00:16:51.630 Test: blockdev nvme passthru vendor specific ...[2024-07-15 09:25:02.661053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:51.630 [2024-07-15 09:25:02.661080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:51.630 [2024-07-15 09:25:02.661230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:51.630 [2024-07-15 09:25:02.661254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:51.630 [2024-07-15 09:25:02.661387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:51.630 [2024-07-15 09:25:02.661411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:51.630 [2024-07-15 09:25:02.661542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:51.630 [2024-07-15 09:25:02.661565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:51.630 passed 00:16:51.630 Test: blockdev nvme admin passthru ...passed 00:16:51.630 Test: blockdev copy ...passed 00:16:51.630 00:16:51.630 Run Summary: Type Total Ran Passed Failed Inactive 00:16:51.630 suites 1 1 n/a 0 0 00:16:51.630 tests 23 23 23 0 0 00:16:51.630 asserts 152 152 152 0 n/a 00:16:51.630 00:16:51.630 Elapsed time = 1.032 seconds 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.888 rmmod nvme_tcp 00:16:51.888 rmmod nvme_fabrics 00:16:51.888 rmmod nvme_keyring 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 827266 ']' 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 827266 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
827266 ']' 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 827266 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.888 09:25:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 827266 00:16:51.888 09:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:51.888 09:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:51.888 09:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 827266' 00:16:51.888 killing process with pid 827266 00:16:51.888 09:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 827266 00:16:51.888 09:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 827266 00:16:52.147 09:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:52.147 09:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:52.147 09:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:52.147 09:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:52.147 09:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:52.147 09:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.147 09:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.147 09:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.686 09:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:54.686 00:16:54.686 real 0m6.213s 00:16:54.686 user 0m9.425s 00:16:54.686 sys 0m2.045s 00:16:54.686 09:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:54.686 09:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:54.686 ************************************ 00:16:54.686 END TEST nvmf_bdevio 00:16:54.686 ************************************ 00:16:54.686 09:25:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:54.686 09:25:05 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:54.686 09:25:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:54.686 09:25:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.686 09:25:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:54.686 ************************************ 00:16:54.686 START TEST nvmf_auth_target 00:16:54.686 ************************************ 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:54.686 * Looking for test storage... 
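The nvmf_bdevio teardown traced above follows autotest_common.sh's killprocess flow: bail out if the pid variable is empty, probe the process with kill -0, read its command name with ps to decide how it should be killed, then kill and wait so the reactor is fully reaped before the NIC addresses are flushed. A minimal stand-alone sketch of that pattern (the function name and the plain-kill fallback here are illustrative, not the exact helper):

    kill_and_wait() {
        local pid=$1
        [ -z "$pid" ] && return 1               # no pid captured, nothing to kill
        kill -0 "$pid" 2>/dev/null || return 0  # process already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid"                             # the traced helper special-cases sudo-owned processes
        wait "$pid" 2>/dev/null || true         # reap it; ignore the exit status
    }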
00:16:54.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:54.686 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:54.687 09:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:54.687 09:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.592 09:25:07 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:56.592 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:56.592 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:16:56.592 Found net devices under 0000:09:00.0: cvl_0_0 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.592 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:56.593 Found net devices under 0000:09:00.1: cvl_0_1 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:56.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:16:56.593 00:16:56.593 --- 10.0.0.2 ping statistics --- 00:16:56.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.593 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:56.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:16:56.593 00:16:56.593 --- 10.0.0.1 ping statistics --- 00:16:56.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.593 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=829484 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 829484 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 829484 ']' 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
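The nvmf_tcp_init sequence above builds the two-port test topology behind those pings: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, port 4420 is opened through iptables, and a ping in each direction proves the path. Reduced to the essential commands from this run:

    ip netns add cvl_0_0_ns_spdk                        # namespace that will host the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The target application is then launched under ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above.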
00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.593 09:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=829504 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=25b594fb1b90a4ca224de94858e7250ef86566e55825227f 00:16:56.852 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kuQ 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 25b594fb1b90a4ca224de94858e7250ef86566e55825227f 0 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 25b594fb1b90a4ca224de94858e7250ef86566e55825227f 0 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=25b594fb1b90a4ca224de94858e7250ef86566e55825227f 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kuQ 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kuQ 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.kuQ 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=44533186a61bb4520de411b254e2a7c3c3a0ae4011b49706c998d1c73360e509 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.YNq 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 44533186a61bb4520de411b254e2a7c3c3a0ae4011b49706c998d1c73360e509 3 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 44533186a61bb4520de411b254e2a7c3c3a0ae4011b49706c998d1c73360e509 3 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=44533186a61bb4520de411b254e2a7c3c3a0ae4011b49706c998d1c73360e509 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:56.853 09:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.YNq 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.YNq 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.YNq 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8d2649c51fea01176382e015cbfcfa9d 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0os 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8d2649c51fea01176382e015cbfcfa9d 1 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8d2649c51fea01176382e015cbfcfa9d 1 00:16:56.853 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=8d2649c51fea01176382e015cbfcfa9d 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0os 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0os 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.0os 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d1289b2c891cad792332bc75cd5f44d7899fb3b1ba18b8f6 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.AoN 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d1289b2c891cad792332bc75cd5f44d7899fb3b1ba18b8f6 2 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d1289b2c891cad792332bc75cd5f44d7899fb3b1ba18b8f6 2 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d1289b2c891cad792332bc75cd5f44d7899fb3b1ba18b8f6 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.AoN 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.AoN 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.AoN 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=44833af33823ce5701d15f6633069a9fd894cf1730e2f13e 00:16:57.112 
09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JfD 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 44833af33823ce5701d15f6633069a9fd894cf1730e2f13e 2 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 44833af33823ce5701d15f6633069a9fd894cf1730e2f13e 2 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=44833af33823ce5701d15f6633069a9fd894cf1730e2f13e 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JfD 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JfD 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.JfD 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e75da58826e1ec9695ac3ff7b1311289 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.o48 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e75da58826e1ec9695ac3ff7b1311289 1 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e75da58826e1ec9695ac3ff7b1311289 1 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e75da58826e1ec9695ac3ff7b1311289 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.o48 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.o48 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.o48 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5acf8eba34168ad1bac54be7402b9fa858b4ca28f68da6a09b7f0e240d7a33a4 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.CTU 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5acf8eba34168ad1bac54be7402b9fa858b4ca28f68da6a09b7f0e240d7a33a4 3 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5acf8eba34168ad1bac54be7402b9fa858b4ca28f68da6a09b7f0e240d7a33a4 3 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5acf8eba34168ad1bac54be7402b9fa858b4ca28f68da6a09b7f0e240d7a33a4 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:57.112 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:57.370 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.CTU 00:16:57.370 09:25:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.CTU 00:16:57.370 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.CTU 00:16:57.370 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:57.370 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 829484 00:16:57.370 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 829484 ']' 00:16:57.370 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.370 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.370 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
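Each gen_dhchap_key call above pulls len/2 random bytes with xxd, keeps them as a len-character hex string, and hands that string to an inline python snippet that emits the DHHC-1 secret later seen on the nvme connect lines (e.g. DHHC-1:00:MjViNTk0...==:). Judging from those secrets, the encoding base64s the ASCII key text plus its little-endian CRC-32, and the second field is the digest index from the table traced above (0=null, 1=sha256, 2=sha384, 3=sha512). A hedged reconstruction of that transform, not the verbatim common.sh source:

    gen_dhchap_secret() { # digest index: 0=null 1=sha256 2=sha384 3=sha512
        local digest=$1 len=$2 key
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # "len" hex characters of key text
        # base64(key text + little-endian CRC-32), framed as DHHC-1:<digest>:<blob>:
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" "$digest"
    }

Under those assumptions, gen_dhchap_secret 0 48 yields a secret shaped exactly like the DHHC-1:00:...: value that the later connect command passes as --dhchap-secret.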
00:16:57.371 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.371 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 829504 /var/tmp/host.sock 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 829504 ']' 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:57.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.629 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.887 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.888 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:57.888 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kuQ 00:16:57.888 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.888 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.888 09:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.888 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.kuQ 00:16:57.888 09:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.kuQ 00:16:58.145 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.YNq ]] 00:16:58.145 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YNq 00:16:58.145 09:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.145 09:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.145 09:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.145 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YNq 00:16:58.145 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YNq 00:16:58.402 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:58.402 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0os 00:16:58.402 09:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.402 09:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.402 09:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.402 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0os 00:16:58.402 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.0os 00:16:58.660 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.AoN ]] 00:16:58.660 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AoN 00:16:58.660 09:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.660 09:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.660 09:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.660 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AoN 00:16:58.660 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AoN 00:16:58.917 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:58.917 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.JfD 00:16:58.917 09:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.917 09:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.917 09:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.917 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.JfD 00:16:58.917 09:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.JfD 00:16:59.175 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.o48 ]] 00:16:59.175 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o48 00:16:59.175 09:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.175 09:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.175 09:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.175 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o48 00:16:59.175 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.o48 00:16:59.433 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:59.433 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.CTU 00:16:59.433 09:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.433 09:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.433 09:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.433 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.CTU 00:16:59.433 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.CTU 00:16:59.690 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:59.690 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:59.690 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.690 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.690 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:59.690 09:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:59.947 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:59.947 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.947 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.947 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:59.947 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:59.947 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.947 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.947 09:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.947 09:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.947 09:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.947 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.948 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.205 00:17:00.205 09:25:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.205 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.205 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.462 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.462 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.462 09:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.462 09:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.462 09:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.462 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.462 { 00:17:00.462 "cntlid": 1, 00:17:00.462 "qid": 0, 00:17:00.462 "state": "enabled", 00:17:00.462 "thread": "nvmf_tgt_poll_group_000", 00:17:00.462 "listen_address": { 00:17:00.462 "trtype": "TCP", 00:17:00.462 "adrfam": "IPv4", 00:17:00.462 "traddr": "10.0.0.2", 00:17:00.462 "trsvcid": "4420" 00:17:00.462 }, 00:17:00.462 "peer_address": { 00:17:00.462 "trtype": "TCP", 00:17:00.462 "adrfam": "IPv4", 00:17:00.462 "traddr": "10.0.0.1", 00:17:00.462 "trsvcid": "60158" 00:17:00.462 }, 00:17:00.462 "auth": { 00:17:00.462 "state": "completed", 00:17:00.462 "digest": "sha256", 00:17:00.462 "dhgroup": "null" 00:17:00.462 } 00:17:00.462 } 00:17:00.462 ]' 00:17:00.462 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.463 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.463 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.463 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:00.463 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.721 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.721 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.721 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.978 09:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.238 09:25:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.238 09:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.239 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.239 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.239 00:17:06.239 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.239 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.239 09:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.239 { 00:17:06.239 "cntlid": 3, 00:17:06.239 "qid": 0, 00:17:06.239 
"state": "enabled", 00:17:06.239 "thread": "nvmf_tgt_poll_group_000", 00:17:06.239 "listen_address": { 00:17:06.239 "trtype": "TCP", 00:17:06.239 "adrfam": "IPv4", 00:17:06.239 "traddr": "10.0.0.2", 00:17:06.239 "trsvcid": "4420" 00:17:06.239 }, 00:17:06.239 "peer_address": { 00:17:06.239 "trtype": "TCP", 00:17:06.239 "adrfam": "IPv4", 00:17:06.239 "traddr": "10.0.0.1", 00:17:06.239 "trsvcid": "37172" 00:17:06.239 }, 00:17:06.239 "auth": { 00:17:06.239 "state": "completed", 00:17:06.239 "digest": "sha256", 00:17:06.239 "dhgroup": "null" 00:17:06.239 } 00:17:06.239 } 00:17:06.239 ]' 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.239 09:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.503 09:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:17:07.570 09:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.570 09:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:07.570 09:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.570 09:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.570 09:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.570 09:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.570 09:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:07.570 09:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:07.845 09:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:07.845 09:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.845 09:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:07.845 09:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:07.845 09:25:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:07.845 09:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.845 09:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.845 09:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.845 09:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.845 09:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.845 09:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.845 09:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.110 00:17:08.110 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.110 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.110 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.110 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.110 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.110 09:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.110 09:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.368 09:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.368 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.368 { 00:17:08.368 "cntlid": 5, 00:17:08.368 "qid": 0, 00:17:08.368 "state": "enabled", 00:17:08.368 "thread": "nvmf_tgt_poll_group_000", 00:17:08.368 "listen_address": { 00:17:08.368 "trtype": "TCP", 00:17:08.368 "adrfam": "IPv4", 00:17:08.368 "traddr": "10.0.0.2", 00:17:08.368 "trsvcid": "4420" 00:17:08.368 }, 00:17:08.368 "peer_address": { 00:17:08.368 "trtype": "TCP", 00:17:08.368 "adrfam": "IPv4", 00:17:08.368 "traddr": "10.0.0.1", 00:17:08.368 "trsvcid": "37202" 00:17:08.368 }, 00:17:08.368 "auth": { 00:17:08.368 "state": "completed", 00:17:08.368 "digest": "sha256", 00:17:08.368 "dhgroup": "null" 00:17:08.368 } 00:17:08.368 } 00:17:08.368 ]' 00:17:08.368 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.368 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.368 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.368 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:08.368 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:08.368 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.368 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.368 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.625 09:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:17:09.562 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.562 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:09.562 09:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.562 09:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.562 09:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.562 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.562 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:09.562 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:09.820 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:09.820 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.820 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:09.820 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:09.820 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:09.820 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.820 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:09.820 09:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.820 09:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.820 09:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.820 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.820 09:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.078 00:17:10.078 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.078 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.078 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.335 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.335 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.335 09:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.335 09:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.335 09:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.335 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.335 { 00:17:10.335 "cntlid": 7, 00:17:10.335 "qid": 0, 00:17:10.335 "state": "enabled", 00:17:10.335 "thread": "nvmf_tgt_poll_group_000", 00:17:10.335 "listen_address": { 00:17:10.335 "trtype": "TCP", 00:17:10.336 "adrfam": "IPv4", 00:17:10.336 "traddr": "10.0.0.2", 00:17:10.336 "trsvcid": "4420" 00:17:10.336 }, 00:17:10.336 "peer_address": { 00:17:10.336 "trtype": "TCP", 00:17:10.336 "adrfam": "IPv4", 00:17:10.336 "traddr": "10.0.0.1", 00:17:10.336 "trsvcid": "37226" 00:17:10.336 }, 00:17:10.336 "auth": { 00:17:10.336 "state": "completed", 00:17:10.336 "digest": "sha256", 00:17:10.336 "dhgroup": "null" 00:17:10.336 } 00:17:10.336 } 00:17:10.336 ]' 00:17:10.336 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.336 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.336 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.336 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:10.336 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.336 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.336 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.336 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.594 09:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:17:11.528 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.528 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:11.528 09:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.528 09:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.528 09:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.528 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.528 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.528 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:11.528 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:11.785 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:11.785 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.785 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:11.785 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:11.785 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:11.785 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.785 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.785 09:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.785 09:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.785 09:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.785 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.785 09:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.042 00:17:12.042 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.042 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.042 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.299 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.299 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.299 09:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:12.299 09:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.299 09:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.299 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.299 { 00:17:12.299 "cntlid": 9, 00:17:12.299 "qid": 0, 00:17:12.299 "state": "enabled", 00:17:12.299 "thread": "nvmf_tgt_poll_group_000", 00:17:12.299 "listen_address": { 00:17:12.299 "trtype": "TCP", 00:17:12.299 "adrfam": "IPv4", 00:17:12.299 "traddr": "10.0.0.2", 00:17:12.299 "trsvcid": "4420" 00:17:12.299 }, 00:17:12.299 "peer_address": { 00:17:12.299 "trtype": "TCP", 00:17:12.299 "adrfam": "IPv4", 00:17:12.299 "traddr": "10.0.0.1", 00:17:12.299 "trsvcid": "37266" 00:17:12.299 }, 00:17:12.299 "auth": { 00:17:12.299 "state": "completed", 00:17:12.299 "digest": "sha256", 00:17:12.299 "dhgroup": "ffdhe2048" 00:17:12.299 } 00:17:12.299 } 00:17:12.299 ]' 00:17:12.299 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.556 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.556 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.556 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.556 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.556 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.556 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.556 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.815 09:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:17:13.748 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.748 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:13.748 09:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.748 09:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.748 09:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.748 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.749 09:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.316 00:17:14.316 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.316 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.316 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.316 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.316 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.316 09:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.316 09:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.575 09:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.575 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.575 { 00:17:14.575 "cntlid": 11, 00:17:14.575 "qid": 0, 00:17:14.575 "state": "enabled", 00:17:14.575 "thread": "nvmf_tgt_poll_group_000", 00:17:14.575 "listen_address": { 00:17:14.575 "trtype": "TCP", 00:17:14.575 "adrfam": "IPv4", 00:17:14.575 "traddr": "10.0.0.2", 00:17:14.575 "trsvcid": "4420" 00:17:14.575 }, 00:17:14.575 "peer_address": { 00:17:14.575 "trtype": "TCP", 00:17:14.575 "adrfam": "IPv4", 00:17:14.575 "traddr": "10.0.0.1", 00:17:14.575 "trsvcid": "37308" 00:17:14.575 }, 00:17:14.575 "auth": { 00:17:14.575 "state": "completed", 00:17:14.575 "digest": "sha256", 00:17:14.575 "dhgroup": "ffdhe2048" 00:17:14.575 } 00:17:14.575 } 00:17:14.575 ]' 00:17:14.575 
09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.575 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.575 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.575 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.575 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.575 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.575 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.575 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.833 09:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:17:15.766 09:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.766 09:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:15.766 09:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.766 09:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.766 09:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.766 09:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.766 09:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:15.766 09:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:16.023 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:16.023 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.023 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:16.023 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:16.023 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.024 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.024 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.024 09:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.024 09:25:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.024 09:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.024 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.024 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.281 00:17:16.281 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.281 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.281 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.537 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.537 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.537 09:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.537 09:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.537 09:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.537 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.537 { 00:17:16.537 "cntlid": 13, 00:17:16.537 "qid": 0, 00:17:16.537 "state": "enabled", 00:17:16.537 "thread": "nvmf_tgt_poll_group_000", 00:17:16.537 "listen_address": { 00:17:16.537 "trtype": "TCP", 00:17:16.537 "adrfam": "IPv4", 00:17:16.537 "traddr": "10.0.0.2", 00:17:16.537 "trsvcid": "4420" 00:17:16.537 }, 00:17:16.537 "peer_address": { 00:17:16.537 "trtype": "TCP", 00:17:16.537 "adrfam": "IPv4", 00:17:16.537 "traddr": "10.0.0.1", 00:17:16.537 "trsvcid": "50712" 00:17:16.537 }, 00:17:16.537 "auth": { 00:17:16.537 "state": "completed", 00:17:16.537 "digest": "sha256", 00:17:16.537 "dhgroup": "ffdhe2048" 00:17:16.537 } 00:17:16.537 } 00:17:16.537 ]' 00:17:16.537 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.537 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.537 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.537 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.537 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.794 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.794 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.794 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.051 09:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:17:17.984 09:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.984 09:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:17.984 09:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.984 09:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.984 09:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.984 09:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.984 09:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:17.984 09:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:17.984 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:17.984 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.984 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.984 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:17.984 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:17.984 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.984 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:17.984 09:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.984 09:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.984 09:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.984 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.984 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.243 00:17:18.502 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.502 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.502 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.502 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.502 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.502 09:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.502 09:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.760 09:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.760 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.760 { 00:17:18.760 "cntlid": 15, 00:17:18.760 "qid": 0, 00:17:18.760 "state": "enabled", 00:17:18.760 "thread": "nvmf_tgt_poll_group_000", 00:17:18.761 "listen_address": { 00:17:18.761 "trtype": "TCP", 00:17:18.761 "adrfam": "IPv4", 00:17:18.761 "traddr": "10.0.0.2", 00:17:18.761 "trsvcid": "4420" 00:17:18.761 }, 00:17:18.761 "peer_address": { 00:17:18.761 "trtype": "TCP", 00:17:18.761 "adrfam": "IPv4", 00:17:18.761 "traddr": "10.0.0.1", 00:17:18.761 "trsvcid": "50744" 00:17:18.761 }, 00:17:18.761 "auth": { 00:17:18.761 "state": "completed", 00:17:18.761 "digest": "sha256", 00:17:18.761 "dhgroup": "ffdhe2048" 00:17:18.761 } 00:17:18.761 } 00:17:18.761 ]' 00:17:18.761 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.761 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.761 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.761 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.761 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.761 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.761 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.761 09:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.019 09:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:17:19.959 09:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.959 09:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:19.959 09:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.959 09:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.959 09:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.959 09:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.959 09:25:30 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.959 09:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:19.959 09:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:20.219 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:20.219 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.219 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:20.219 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:20.219 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:20.219 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.219 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.219 09:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.219 09:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.219 09:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.219 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.219 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.477 00:17:20.477 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.477 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.477 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.735 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.735 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.735 09:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.735 09:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.735 09:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.735 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.735 { 00:17:20.735 "cntlid": 17, 00:17:20.735 "qid": 0, 00:17:20.735 "state": "enabled", 00:17:20.735 "thread": "nvmf_tgt_poll_group_000", 00:17:20.735 "listen_address": { 00:17:20.735 "trtype": "TCP", 00:17:20.735 "adrfam": "IPv4", 00:17:20.735 "traddr": 
"10.0.0.2", 00:17:20.735 "trsvcid": "4420" 00:17:20.735 }, 00:17:20.735 "peer_address": { 00:17:20.735 "trtype": "TCP", 00:17:20.735 "adrfam": "IPv4", 00:17:20.735 "traddr": "10.0.0.1", 00:17:20.735 "trsvcid": "50772" 00:17:20.735 }, 00:17:20.735 "auth": { 00:17:20.735 "state": "completed", 00:17:20.735 "digest": "sha256", 00:17:20.735 "dhgroup": "ffdhe3072" 00:17:20.735 } 00:17:20.735 } 00:17:20.735 ]' 00:17:20.735 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.735 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.735 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.735 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.735 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.994 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.994 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.994 09:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.252 09:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.189 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.757 00:17:22.757 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.757 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.757 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.015 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.015 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.015 09:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.015 09:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.015 09:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.015 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.015 { 00:17:23.015 "cntlid": 19, 00:17:23.015 "qid": 0, 00:17:23.015 "state": "enabled", 00:17:23.015 "thread": "nvmf_tgt_poll_group_000", 00:17:23.015 "listen_address": { 00:17:23.015 "trtype": "TCP", 00:17:23.015 "adrfam": "IPv4", 00:17:23.015 "traddr": "10.0.0.2", 00:17:23.015 "trsvcid": "4420" 00:17:23.015 }, 00:17:23.015 "peer_address": { 00:17:23.015 "trtype": "TCP", 00:17:23.015 "adrfam": "IPv4", 00:17:23.015 "traddr": "10.0.0.1", 00:17:23.015 "trsvcid": "50804" 00:17:23.015 }, 00:17:23.015 "auth": { 00:17:23.015 "state": "completed", 00:17:23.015 "digest": "sha256", 00:17:23.015 "dhgroup": "ffdhe3072" 00:17:23.015 } 00:17:23.015 } 00:17:23.015 ]' 00:17:23.015 09:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.015 09:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.015 09:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.015 09:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:23.015 09:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.015 09:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.015 09:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.015 09:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.275 09:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:17:24.213 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.213 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:24.213 09:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.213 09:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.213 09:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.213 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.213 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:24.213 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:24.472 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:24.472 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.472 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:24.472 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:24.472 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:24.472 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.472 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.472 09:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.472 09:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.472 09:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.472 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.472 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.731 00:17:24.731 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.731 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.731 09:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.989 09:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.989 09:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.989 09:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.989 09:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.989 09:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.989 09:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.989 { 00:17:24.989 "cntlid": 21, 00:17:24.989 "qid": 0, 00:17:24.989 "state": "enabled", 00:17:24.989 "thread": "nvmf_tgt_poll_group_000", 00:17:24.989 "listen_address": { 00:17:24.989 "trtype": "TCP", 00:17:24.989 "adrfam": "IPv4", 00:17:24.989 "traddr": "10.0.0.2", 00:17:24.989 "trsvcid": "4420" 00:17:24.989 }, 00:17:24.989 "peer_address": { 00:17:24.989 "trtype": "TCP", 00:17:24.989 "adrfam": "IPv4", 00:17:24.989 "traddr": "10.0.0.1", 00:17:24.989 "trsvcid": "50830" 00:17:24.989 }, 00:17:24.989 "auth": { 00:17:24.989 "state": "completed", 00:17:24.989 "digest": "sha256", 00:17:24.989 "dhgroup": "ffdhe3072" 00:17:24.989 } 00:17:24.989 } 00:17:24.989 ]' 00:17:24.989 09:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.989 09:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.989 09:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.989 09:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.989 09:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.248 09:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.248 09:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.248 09:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.508 09:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:17:26.446 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:26.446 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:26.446 09:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.446 09:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.446 09:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.446 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.446 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.446 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.705 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:26.705 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.705 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:26.705 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:26.705 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:26.705 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.705 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:26.705 09:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.705 09:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.705 09:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.705 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.705 09:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.964 00:17:26.964 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.964 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.964 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.222 { 00:17:27.222 "cntlid": 23, 00:17:27.222 "qid": 0, 00:17:27.222 "state": "enabled", 00:17:27.222 "thread": "nvmf_tgt_poll_group_000", 00:17:27.222 "listen_address": { 00:17:27.222 "trtype": "TCP", 00:17:27.222 "adrfam": "IPv4", 00:17:27.222 "traddr": "10.0.0.2", 00:17:27.222 "trsvcid": "4420" 00:17:27.222 }, 00:17:27.222 "peer_address": { 00:17:27.222 "trtype": "TCP", 00:17:27.222 "adrfam": "IPv4", 00:17:27.222 "traddr": "10.0.0.1", 00:17:27.222 "trsvcid": "59580" 00:17:27.222 }, 00:17:27.222 "auth": { 00:17:27.222 "state": "completed", 00:17:27.222 "digest": "sha256", 00:17:27.222 "dhgroup": "ffdhe3072" 00:17:27.222 } 00:17:27.222 } 00:17:27.222 ]' 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.222 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.480 09:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:17:28.415 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.415 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:28.415 09:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.415 09:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.415 09:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.415 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.415 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.415 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:28.415 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:28.673 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:17:28.673 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.673 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:28.673 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:28.673 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:28.673 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.673 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.673 09:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.673 09:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.673 09:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.673 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.673 09:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.931 00:17:29.189 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.189 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.189 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.189 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.189 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.189 09:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.189 09:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.447 09:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.447 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.447 { 00:17:29.447 "cntlid": 25, 00:17:29.447 "qid": 0, 00:17:29.447 "state": "enabled", 00:17:29.447 "thread": "nvmf_tgt_poll_group_000", 00:17:29.447 "listen_address": { 00:17:29.447 "trtype": "TCP", 00:17:29.447 "adrfam": "IPv4", 00:17:29.447 "traddr": "10.0.0.2", 00:17:29.447 "trsvcid": "4420" 00:17:29.447 }, 00:17:29.447 "peer_address": { 00:17:29.447 "trtype": "TCP", 00:17:29.447 "adrfam": "IPv4", 00:17:29.447 "traddr": "10.0.0.1", 00:17:29.447 "trsvcid": "59612" 00:17:29.447 }, 00:17:29.447 "auth": { 00:17:29.447 "state": "completed", 00:17:29.447 "digest": "sha256", 00:17:29.447 "dhgroup": "ffdhe4096" 00:17:29.447 } 00:17:29.447 } 00:17:29.447 ]' 00:17:29.447 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.447 09:25:40 
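
The jq assertions that follow are the actual verification step: they read the negotiated auth parameters back out of the target's qpair listing and compare them against what was configured. Condensed into one place (same subsystem NQN, and assuming the rpc_cmd helper from autotest_common.sh is available, as it is in the log), the check is roughly:

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# One [[ ... == ... ]] assertion per field, mirroring target/auth.sh@46-48.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
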
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.447 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.447 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:29.447 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.447 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.447 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.447 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.706 09:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:17:30.641 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.641 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:30.641 09:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.641 09:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.641 09:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.641 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.641 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.641 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.899 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:30.899 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.899 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:30.899 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:30.899 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:30.899 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.899 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.899 09:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.899 09:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.899 09:25:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.899 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.899 09:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.157 00:17:31.157 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.157 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.157 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.414 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.414 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.414 09:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.414 09:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.414 09:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.414 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.414 { 00:17:31.414 "cntlid": 27, 00:17:31.414 "qid": 0, 00:17:31.414 "state": "enabled", 00:17:31.414 "thread": "nvmf_tgt_poll_group_000", 00:17:31.414 "listen_address": { 00:17:31.414 "trtype": "TCP", 00:17:31.414 "adrfam": "IPv4", 00:17:31.414 "traddr": "10.0.0.2", 00:17:31.414 "trsvcid": "4420" 00:17:31.414 }, 00:17:31.414 "peer_address": { 00:17:31.414 "trtype": "TCP", 00:17:31.414 "adrfam": "IPv4", 00:17:31.414 "traddr": "10.0.0.1", 00:17:31.414 "trsvcid": "59658" 00:17:31.414 }, 00:17:31.414 "auth": { 00:17:31.414 "state": "completed", 00:17:31.414 "digest": "sha256", 00:17:31.414 "dhgroup": "ffdhe4096" 00:17:31.414 } 00:17:31.414 } 00:17:31.414 ]' 00:17:31.414 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.414 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.414 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.414 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:31.414 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.672 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.672 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.672 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.930 09:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.864 09:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.434 00:17:33.434 09:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.434 09:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.434 09:25:44 
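
Note the shape of the nvme connect invocations: with only --dhchap-secret (as for key3) the exchange is unidirectional, with the host proving itself to the controller; the key0..key2 passes also carry --dhchap-ctrl-secret, which makes the authentication bidirectional. A sketch with placeholder DHHC-1 blobs (the real ones appear verbatim in the log) and $hostnqn/$hostid standing in for the uuid:29f67375-... values:

# Unidirectional: only the host authenticates.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" --dhchap-secret "DHHC-1:03:..."

# Bidirectional: the controller must additionally answer with its own secret.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
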
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.434 09:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.434 09:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.434 09:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.434 09:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.692 09:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.692 09:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.692 { 00:17:33.692 "cntlid": 29, 00:17:33.692 "qid": 0, 00:17:33.692 "state": "enabled", 00:17:33.692 "thread": "nvmf_tgt_poll_group_000", 00:17:33.692 "listen_address": { 00:17:33.692 "trtype": "TCP", 00:17:33.692 "adrfam": "IPv4", 00:17:33.692 "traddr": "10.0.0.2", 00:17:33.692 "trsvcid": "4420" 00:17:33.692 }, 00:17:33.692 "peer_address": { 00:17:33.692 "trtype": "TCP", 00:17:33.692 "adrfam": "IPv4", 00:17:33.692 "traddr": "10.0.0.1", 00:17:33.692 "trsvcid": "59682" 00:17:33.692 }, 00:17:33.692 "auth": { 00:17:33.692 "state": "completed", 00:17:33.692 "digest": "sha256", 00:17:33.692 "dhgroup": "ffdhe4096" 00:17:33.692 } 00:17:33.692 } 00:17:33.692 ]' 00:17:33.692 09:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.692 09:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.692 09:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.692 09:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:33.692 09:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.692 09:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.692 09:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.692 09:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.950 09:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:17:34.891 09:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.891 09:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:34.891 09:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.891 09:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.892 09:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.892 09:25:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.892 09:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:34.892 09:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:35.150 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:35.150 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.150 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:35.150 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:35.150 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.150 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.150 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:35.150 09:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.150 09:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.150 09:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.150 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.150 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.721 00:17:35.721 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.721 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.721 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.979 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.979 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.979 09:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.979 09:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.979 09:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.979 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.979 { 00:17:35.979 "cntlid": 31, 00:17:35.979 "qid": 0, 00:17:35.979 "state": "enabled", 00:17:35.979 "thread": "nvmf_tgt_poll_group_000", 00:17:35.979 "listen_address": { 00:17:35.979 "trtype": "TCP", 00:17:35.979 "adrfam": "IPv4", 00:17:35.979 "traddr": "10.0.0.2", 00:17:35.979 "trsvcid": "4420" 00:17:35.979 }, 
00:17:35.979 "peer_address": { 00:17:35.979 "trtype": "TCP", 00:17:35.979 "adrfam": "IPv4", 00:17:35.979 "traddr": "10.0.0.1", 00:17:35.979 "trsvcid": "34146" 00:17:35.979 }, 00:17:35.979 "auth": { 00:17:35.979 "state": "completed", 00:17:35.979 "digest": "sha256", 00:17:35.979 "dhgroup": "ffdhe4096" 00:17:35.979 } 00:17:35.979 } 00:17:35.979 ]' 00:17:35.979 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.979 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.979 09:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.979 09:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:35.979 09:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.980 09:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.980 09:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.980 09:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.240 09:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:17:37.179 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.179 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:37.179 09:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.179 09:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.179 09:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.179 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.179 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.179 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:37.179 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:37.437 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:37.437 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.437 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:37.437 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:37.437 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:37.437 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:37.437 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.437 09:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.437 09:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.437 09:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.437 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.437 09:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.005 00:17:38.005 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.005 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.006 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.264 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.264 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.264 09:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.264 09:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.264 09:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.264 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.264 { 00:17:38.264 "cntlid": 33, 00:17:38.264 "qid": 0, 00:17:38.264 "state": "enabled", 00:17:38.264 "thread": "nvmf_tgt_poll_group_000", 00:17:38.264 "listen_address": { 00:17:38.264 "trtype": "TCP", 00:17:38.264 "adrfam": "IPv4", 00:17:38.264 "traddr": "10.0.0.2", 00:17:38.264 "trsvcid": "4420" 00:17:38.264 }, 00:17:38.264 "peer_address": { 00:17:38.264 "trtype": "TCP", 00:17:38.264 "adrfam": "IPv4", 00:17:38.264 "traddr": "10.0.0.1", 00:17:38.264 "trsvcid": "34170" 00:17:38.264 }, 00:17:38.264 "auth": { 00:17:38.264 "state": "completed", 00:17:38.264 "digest": "sha256", 00:17:38.264 "dhgroup": "ffdhe6144" 00:17:38.264 } 00:17:38.264 } 00:17:38.264 ]' 00:17:38.264 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.264 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.264 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.264 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:38.264 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.264 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.264 09:25:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.264 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.524 09:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:17:39.462 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.462 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:39.462 09:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.462 09:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.462 09:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.462 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.462 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:39.462 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:39.720 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:39.720 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.720 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.720 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:39.720 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:39.720 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.720 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.720 09:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.720 09:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.720 09:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.720 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.720 09:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.288 00:17:40.288 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.288 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.288 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.546 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.546 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.546 09:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.546 09:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.546 09:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.546 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.546 { 00:17:40.546 "cntlid": 35, 00:17:40.546 "qid": 0, 00:17:40.546 "state": "enabled", 00:17:40.546 "thread": "nvmf_tgt_poll_group_000", 00:17:40.546 "listen_address": { 00:17:40.546 "trtype": "TCP", 00:17:40.546 "adrfam": "IPv4", 00:17:40.546 "traddr": "10.0.0.2", 00:17:40.546 "trsvcid": "4420" 00:17:40.546 }, 00:17:40.546 "peer_address": { 00:17:40.546 "trtype": "TCP", 00:17:40.546 "adrfam": "IPv4", 00:17:40.546 "traddr": "10.0.0.1", 00:17:40.546 "trsvcid": "34192" 00:17:40.546 }, 00:17:40.546 "auth": { 00:17:40.546 "state": "completed", 00:17:40.546 "digest": "sha256", 00:17:40.546 "dhgroup": "ffdhe6144" 00:17:40.546 } 00:17:40.546 } 00:17:40.546 ]' 00:17:40.546 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.546 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.546 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.804 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:40.804 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.804 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.804 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.804 09:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.061 09:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:17:41.993 09:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.993 09:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:41.993 09:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.993 09:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.993 09:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.993 09:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.993 09:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.993 09:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.993 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:41.993 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.993 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.993 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:41.993 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:41.993 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.993 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.993 09:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.993 09:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.993 09:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.994 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.994 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.558 00:17:42.558 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.558 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.558 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.815 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.815 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.815 09:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.815 09:25:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.815 09:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.815 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.815 { 00:17:42.815 "cntlid": 37, 00:17:42.815 "qid": 0, 00:17:42.815 "state": "enabled", 00:17:42.815 "thread": "nvmf_tgt_poll_group_000", 00:17:42.815 "listen_address": { 00:17:42.815 "trtype": "TCP", 00:17:42.815 "adrfam": "IPv4", 00:17:42.815 "traddr": "10.0.0.2", 00:17:42.816 "trsvcid": "4420" 00:17:42.816 }, 00:17:42.816 "peer_address": { 00:17:42.816 "trtype": "TCP", 00:17:42.816 "adrfam": "IPv4", 00:17:42.816 "traddr": "10.0.0.1", 00:17:42.816 "trsvcid": "34224" 00:17:42.816 }, 00:17:42.816 "auth": { 00:17:42.816 "state": "completed", 00:17:42.816 "digest": "sha256", 00:17:42.816 "dhgroup": "ffdhe6144" 00:17:42.816 } 00:17:42.816 } 00:17:42.816 ]' 00:17:42.816 09:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.073 09:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.073 09:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.073 09:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.073 09:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.073 09:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.073 09:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.073 09:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.332 09:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.270 09:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.836 00:17:44.836 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.836 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.836 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.094 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.094 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.094 09:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.094 09:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.094 09:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.094 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.094 { 00:17:45.094 "cntlid": 39, 00:17:45.094 "qid": 0, 00:17:45.094 "state": "enabled", 00:17:45.094 "thread": "nvmf_tgt_poll_group_000", 00:17:45.094 "listen_address": { 00:17:45.094 "trtype": "TCP", 00:17:45.094 "adrfam": "IPv4", 00:17:45.094 "traddr": "10.0.0.2", 00:17:45.094 "trsvcid": "4420" 00:17:45.094 }, 00:17:45.094 "peer_address": { 00:17:45.094 "trtype": "TCP", 00:17:45.094 "adrfam": "IPv4", 00:17:45.094 "traddr": "10.0.0.1", 00:17:45.094 "trsvcid": "34252" 00:17:45.094 }, 00:17:45.094 "auth": { 00:17:45.094 "state": "completed", 00:17:45.094 "digest": "sha256", 00:17:45.094 "dhgroup": "ffdhe6144" 00:17:45.094 } 00:17:45.094 } 00:17:45.094 ]' 00:17:45.094 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.352 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.352 09:25:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.352 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.352 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.353 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.353 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.353 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.611 09:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.547 09:25:57 
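
Every combination ends with the same teardown before the next one is configured, so no controller, kernel session, or host entry survives into the following handshake. The cleanup steps the log keeps repeating, gathered in one place (same placeholders as above):

scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0      # drop the in-app controller
nvme disconnect -n nqn.2024-03.io.spdk:cnode0                               # drop the kernel-initiator session
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"    # revoke the host authorization
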
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.547 09:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.485 00:17:47.485 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.485 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.485 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.743 { 00:17:47.743 "cntlid": 41, 00:17:47.743 "qid": 0, 00:17:47.743 "state": "enabled", 00:17:47.743 "thread": "nvmf_tgt_poll_group_000", 00:17:47.743 "listen_address": { 00:17:47.743 "trtype": "TCP", 00:17:47.743 "adrfam": "IPv4", 00:17:47.743 "traddr": "10.0.0.2", 00:17:47.743 "trsvcid": "4420" 00:17:47.743 }, 00:17:47.743 "peer_address": { 00:17:47.743 "trtype": "TCP", 00:17:47.743 "adrfam": "IPv4", 00:17:47.743 "traddr": "10.0.0.1", 00:17:47.743 "trsvcid": "52134" 00:17:47.743 }, 00:17:47.743 "auth": { 00:17:47.743 "state": "completed", 00:17:47.743 "digest": "sha256", 00:17:47.743 "dhgroup": "ffdhe8192" 00:17:47.743 } 00:17:47.743 } 00:17:47.743 ]' 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.743 09:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.002 09:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:17:48.938 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.938 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:48.938 09:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.938 09:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.938 09:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.938 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.938 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.938 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:49.196 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:49.196 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.196 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.196 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:49.196 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:49.196 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.196 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.196 09:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.196 09:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.196 09:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.196 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.196 09:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.133 00:17:50.133 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.133 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.133 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.391 { 00:17:50.391 "cntlid": 43, 00:17:50.391 "qid": 0, 00:17:50.391 "state": "enabled", 00:17:50.391 "thread": "nvmf_tgt_poll_group_000", 00:17:50.391 "listen_address": { 00:17:50.391 "trtype": "TCP", 00:17:50.391 "adrfam": "IPv4", 00:17:50.391 "traddr": "10.0.0.2", 00:17:50.391 "trsvcid": "4420" 00:17:50.391 }, 00:17:50.391 "peer_address": { 00:17:50.391 "trtype": "TCP", 00:17:50.391 "adrfam": "IPv4", 00:17:50.391 "traddr": "10.0.0.1", 00:17:50.391 "trsvcid": "52158" 00:17:50.391 }, 00:17:50.391 "auth": { 00:17:50.391 "state": "completed", 00:17:50.391 "digest": "sha256", 00:17:50.391 "dhgroup": "ffdhe8192" 00:17:50.391 } 00:17:50.391 } 00:17:50.391 ]' 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.391 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.651 09:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:17:51.584 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.584 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:51.584 09:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.584 09:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.584 09:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.584 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:17:51.584 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:51.584 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:51.842 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:51.842 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.842 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.842 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:51.842 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.842 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.842 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.842 09:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.842 09:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.843 09:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.843 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.843 09:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.777 00:17:52.777 09:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.777 09:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.777 09:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.036 { 00:17:53.036 "cntlid": 45, 00:17:53.036 "qid": 0, 00:17:53.036 "state": "enabled", 00:17:53.036 "thread": "nvmf_tgt_poll_group_000", 00:17:53.036 "listen_address": { 00:17:53.036 "trtype": "TCP", 00:17:53.036 "adrfam": "IPv4", 00:17:53.036 "traddr": "10.0.0.2", 00:17:53.036 "trsvcid": "4420" 
00:17:53.036 }, 00:17:53.036 "peer_address": { 00:17:53.036 "trtype": "TCP", 00:17:53.036 "adrfam": "IPv4", 00:17:53.036 "traddr": "10.0.0.1", 00:17:53.036 "trsvcid": "52186" 00:17:53.036 }, 00:17:53.036 "auth": { 00:17:53.036 "state": "completed", 00:17:53.036 "digest": "sha256", 00:17:53.036 "dhgroup": "ffdhe8192" 00:17:53.036 } 00:17:53.036 } 00:17:53.036 ]' 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.036 09:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.294 09:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:17:54.232 09:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.232 09:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:54.232 09:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.232 09:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.232 09:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.232 09:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.232 09:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:54.232 09:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:54.490 09:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:54.490 09:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.490 09:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.490 09:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:54.490 09:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:54.490 09:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.490 09:26:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:54.490 09:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.490 09:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.490 09:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.490 09:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.490 09:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.427 00:17:55.427 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.427 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.427 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.427 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.427 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.427 09:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.427 09:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.427 09:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.427 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.427 { 00:17:55.427 "cntlid": 47, 00:17:55.427 "qid": 0, 00:17:55.427 "state": "enabled", 00:17:55.428 "thread": "nvmf_tgt_poll_group_000", 00:17:55.428 "listen_address": { 00:17:55.428 "trtype": "TCP", 00:17:55.428 "adrfam": "IPv4", 00:17:55.428 "traddr": "10.0.0.2", 00:17:55.428 "trsvcid": "4420" 00:17:55.428 }, 00:17:55.428 "peer_address": { 00:17:55.428 "trtype": "TCP", 00:17:55.428 "adrfam": "IPv4", 00:17:55.428 "traddr": "10.0.0.1", 00:17:55.428 "trsvcid": "52204" 00:17:55.428 }, 00:17:55.428 "auth": { 00:17:55.428 "state": "completed", 00:17:55.428 "digest": "sha256", 00:17:55.428 "dhgroup": "ffdhe8192" 00:17:55.428 } 00:17:55.428 } 00:17:55.428 ]' 00:17:55.428 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.686 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.686 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.686 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.686 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.686 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.686 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.686 
09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.943 09:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:17:56.875 09:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.875 09:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:56.875 09:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.875 09:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.875 09:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.875 09:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:56.875 09:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.875 09:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.875 09:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:56.875 09:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:57.132 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:57.132 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.132 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.132 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:57.132 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:57.132 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.132 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.132 09:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.132 09:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.132 09:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.132 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.132 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.389 00:17:57.389 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.389 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.389 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.646 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.646 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.646 09:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.646 09:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.646 09:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.646 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.646 { 00:17:57.646 "cntlid": 49, 00:17:57.646 "qid": 0, 00:17:57.646 "state": "enabled", 00:17:57.646 "thread": "nvmf_tgt_poll_group_000", 00:17:57.646 "listen_address": { 00:17:57.646 "trtype": "TCP", 00:17:57.646 "adrfam": "IPv4", 00:17:57.646 "traddr": "10.0.0.2", 00:17:57.646 "trsvcid": "4420" 00:17:57.646 }, 00:17:57.646 "peer_address": { 00:17:57.646 "trtype": "TCP", 00:17:57.646 "adrfam": "IPv4", 00:17:57.646 "traddr": "10.0.0.1", 00:17:57.646 "trsvcid": "48432" 00:17:57.646 }, 00:17:57.646 "auth": { 00:17:57.646 "state": "completed", 00:17:57.646 "digest": "sha384", 00:17:57.646 "dhgroup": "null" 00:17:57.646 } 00:17:57.646 } 00:17:57.646 ]' 00:17:57.646 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.646 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.646 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.903 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:57.903 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.903 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.903 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.903 09:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.162 09:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:17:59.101 09:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.101 09:26:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:59.101 09:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.101 09:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.101 09:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.101 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.101 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:59.101 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:59.101 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:59.101 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.101 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.101 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:59.101 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:59.101 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.101 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.101 09:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.102 09:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.102 09:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.102 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.102 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.359 00:17:59.618 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.618 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.618 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.876 { 00:17:59.876 "cntlid": 51, 00:17:59.876 "qid": 0, 00:17:59.876 "state": "enabled", 00:17:59.876 "thread": "nvmf_tgt_poll_group_000", 00:17:59.876 "listen_address": { 00:17:59.876 "trtype": "TCP", 00:17:59.876 "adrfam": "IPv4", 00:17:59.876 "traddr": "10.0.0.2", 00:17:59.876 "trsvcid": "4420" 00:17:59.876 }, 00:17:59.876 "peer_address": { 00:17:59.876 "trtype": "TCP", 00:17:59.876 "adrfam": "IPv4", 00:17:59.876 "traddr": "10.0.0.1", 00:17:59.876 "trsvcid": "48456" 00:17:59.876 }, 00:17:59.876 "auth": { 00:17:59.876 "state": "completed", 00:17:59.876 "digest": "sha384", 00:17:59.876 "dhgroup": "null" 00:17:59.876 } 00:17:59.876 } 00:17:59.876 ]' 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.876 09:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.133 09:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:18:01.069 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.069 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:01.069 09:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.069 09:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.069 09:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.069 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.069 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:01.069 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:01.326 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:01.326 09:26:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.326 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:01.326 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:01.326 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:01.326 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.326 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.326 09:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.326 09:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.327 09:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.327 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.327 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.583 00:18:01.583 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.583 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.583 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.841 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.841 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.841 09:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.841 09:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.841 09:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.841 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.841 { 00:18:01.841 "cntlid": 53, 00:18:01.841 "qid": 0, 00:18:01.841 "state": "enabled", 00:18:01.841 "thread": "nvmf_tgt_poll_group_000", 00:18:01.841 "listen_address": { 00:18:01.841 "trtype": "TCP", 00:18:01.841 "adrfam": "IPv4", 00:18:01.841 "traddr": "10.0.0.2", 00:18:01.841 "trsvcid": "4420" 00:18:01.841 }, 00:18:01.841 "peer_address": { 00:18:01.841 "trtype": "TCP", 00:18:01.841 "adrfam": "IPv4", 00:18:01.841 "traddr": "10.0.0.1", 00:18:01.841 "trsvcid": "48486" 00:18:01.841 }, 00:18:01.841 "auth": { 00:18:01.841 "state": "completed", 00:18:01.841 "digest": "sha384", 00:18:01.841 "dhgroup": "null" 00:18:01.841 } 00:18:01.841 } 00:18:01.841 ]' 00:18:01.841 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.841 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:01.841 09:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.841 09:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:01.841 09:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.098 09:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.098 09:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.098 09:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.356 09:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:18:03.294 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.294 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:03.294 09:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.294 09:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.294 09:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.294 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.294 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:03.294 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:03.552 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:03.552 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.552 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:03.552 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:03.552 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:03.552 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.552 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:03.552 09:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.552 09:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.552 09:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.552 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.552 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.810 00:18:03.810 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.810 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.810 09:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.068 { 00:18:04.068 "cntlid": 55, 00:18:04.068 "qid": 0, 00:18:04.068 "state": "enabled", 00:18:04.068 "thread": "nvmf_tgt_poll_group_000", 00:18:04.068 "listen_address": { 00:18:04.068 "trtype": "TCP", 00:18:04.068 "adrfam": "IPv4", 00:18:04.068 "traddr": "10.0.0.2", 00:18:04.068 "trsvcid": "4420" 00:18:04.068 }, 00:18:04.068 "peer_address": { 00:18:04.068 "trtype": "TCP", 00:18:04.068 "adrfam": "IPv4", 00:18:04.068 "traddr": "10.0.0.1", 00:18:04.068 "trsvcid": "48516" 00:18:04.068 }, 00:18:04.068 "auth": { 00:18:04.068 "state": "completed", 00:18:04.068 "digest": "sha384", 00:18:04.068 "dhgroup": "null" 00:18:04.068 } 00:18:04.068 } 00:18:04.068 ]' 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.068 09:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.327 09:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:18:05.262 09:26:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.262 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:05.262 09:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.262 09:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.262 09:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.262 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.263 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.263 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.263 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.520 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:05.520 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.520 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.520 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:05.520 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.520 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.520 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.520 09:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.520 09:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.520 09:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.520 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.520 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.777 00:18:05.777 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.777 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.777 09:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.034 09:26:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.034 09:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.034 09:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.034 09:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.034 09:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.034 09:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.034 { 00:18:06.034 "cntlid": 57, 00:18:06.034 "qid": 0, 00:18:06.034 "state": "enabled", 00:18:06.034 "thread": "nvmf_tgt_poll_group_000", 00:18:06.034 "listen_address": { 00:18:06.034 "trtype": "TCP", 00:18:06.034 "adrfam": "IPv4", 00:18:06.034 "traddr": "10.0.0.2", 00:18:06.034 "trsvcid": "4420" 00:18:06.034 }, 00:18:06.034 "peer_address": { 00:18:06.034 "trtype": "TCP", 00:18:06.034 "adrfam": "IPv4", 00:18:06.034 "traddr": "10.0.0.1", 00:18:06.034 "trsvcid": "40212" 00:18:06.034 }, 00:18:06.034 "auth": { 00:18:06.034 "state": "completed", 00:18:06.034 "digest": "sha384", 00:18:06.034 "dhgroup": "ffdhe2048" 00:18:06.034 } 00:18:06.034 } 00:18:06.034 ]' 00:18:06.034 09:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.291 09:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.291 09:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.291 09:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.291 09:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.291 09:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.291 09:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.291 09:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.548 09:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:18:07.484 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.484 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:07.484 09:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.484 09:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.484 09:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.484 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.484 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:07.484 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:07.743 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:07.743 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.743 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:07.743 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:07.743 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:07.743 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.743 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.743 09:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.743 09:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.743 09:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.743 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.743 09:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.002 00:18:08.002 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.002 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.002 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.260 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.260 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.260 09:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.260 09:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.260 09:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.260 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.260 { 00:18:08.260 "cntlid": 59, 00:18:08.260 "qid": 0, 00:18:08.260 "state": "enabled", 00:18:08.260 "thread": "nvmf_tgt_poll_group_000", 00:18:08.260 "listen_address": { 00:18:08.260 "trtype": "TCP", 00:18:08.260 "adrfam": "IPv4", 00:18:08.260 "traddr": "10.0.0.2", 00:18:08.260 "trsvcid": "4420" 00:18:08.260 }, 00:18:08.260 "peer_address": { 00:18:08.260 "trtype": "TCP", 00:18:08.260 "adrfam": "IPv4", 00:18:08.260 
"traddr": "10.0.0.1", 00:18:08.260 "trsvcid": "40254" 00:18:08.260 }, 00:18:08.260 "auth": { 00:18:08.260 "state": "completed", 00:18:08.260 "digest": "sha384", 00:18:08.260 "dhgroup": "ffdhe2048" 00:18:08.260 } 00:18:08.260 } 00:18:08.260 ]' 00:18:08.260 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.260 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.260 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.518 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:08.518 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.518 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.518 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.518 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.777 09:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:09.714 09:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:10.279
00:18:10.279 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:10.279 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:10.279 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:10.279 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:10.279 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:10.279 09:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:10.279 09:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.279 09:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:10.279 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:10.279 {
00:18:10.279 "cntlid": 61,
00:18:10.279 "qid": 0,
00:18:10.279 "state": "enabled",
00:18:10.279 "thread": "nvmf_tgt_poll_group_000",
00:18:10.279 "listen_address": {
00:18:10.279 "trtype": "TCP",
00:18:10.279 "adrfam": "IPv4",
00:18:10.279 "traddr": "10.0.0.2",
00:18:10.279 "trsvcid": "4420"
00:18:10.279 },
00:18:10.279 "peer_address": {
00:18:10.279 "trtype": "TCP",
00:18:10.279 "adrfam": "IPv4",
00:18:10.279 "traddr": "10.0.0.1",
00:18:10.279 "trsvcid": "40270"
00:18:10.279 },
00:18:10.279 "auth": {
00:18:10.279 "state": "completed",
00:18:10.279 "digest": "sha384",
00:18:10.279 "dhgroup": "ffdhe2048"
00:18:10.279 }
00:18:10.279 }
00:18:10.279 ]'
00:18:10.279 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:10.577 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:10.577 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:10.577 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:10.577 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:10.577 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:10.577 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:10.577 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:10.898 09:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD:
00:18:11.875 09:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:11.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:11.875 09:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:11.875 09:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:11.875 09:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.875 09:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:11.875 09:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:11.875 09:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:11.875 09:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:11.875 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3
00:18:11.875 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:11.875 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:11.875 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:11.875 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:11.875 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:11.875 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:18:11.875 09:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:11.875 09:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.875 09:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:11.875 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:11.875 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:12.442
00:18:12.442 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:12.442 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:12.442 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:12.701 {
00:18:12.701 "cntlid": 63,
00:18:12.701 "qid": 0,
00:18:12.701 "state": "enabled",
00:18:12.701 "thread": "nvmf_tgt_poll_group_000",
00:18:12.701 "listen_address": {
00:18:12.701 "trtype": "TCP",
00:18:12.701 "adrfam": "IPv4",
00:18:12.701 "traddr": "10.0.0.2",
00:18:12.701 "trsvcid": "4420"
00:18:12.701 },
00:18:12.701 "peer_address": {
00:18:12.701 "trtype": "TCP",
00:18:12.701 "adrfam": "IPv4",
00:18:12.701 "traddr": "10.0.0.1",
00:18:12.701 "trsvcid": "40298"
00:18:12.701 },
00:18:12.701 "auth": {
00:18:12.701 "state": "completed",
00:18:12.701 "digest": "sha384",
00:18:12.701 "dhgroup": "ffdhe2048"
00:18:12.701 }
00:18:12.701 }
00:18:12.701 ]'
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:12.701 09:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:12.959 09:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=:
00:18:13.894 09:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:13.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:13.894 09:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:13.894 09:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:13.894 09:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:13.894 09:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
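(For orientation: every iteration in this log follows the same fixed sequence; only the DH group and the key index change. A minimal sketch of one iteration, paraphrasing the commands the trace above already shows — $dhgroup, $keyid and $hostnqn stand in for the loop variables and the host NQN used by this job:)

  # hostrpc drives the SPDK host-side bdev_nvme initiator over /var/tmp/host.sock;
  # rpc_cmd talks to the nvmf target over the default RPC socket
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
  # allow the host to authenticate against the subsystem with this key pair
  # (key3 has no controller key in this run, so --dhchap-ctrlr-key is dropped there)
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"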
00:18:13.894 09:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:13.894 09:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:13.894 09:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:13.894 09:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:14.152 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0
00:18:14.153 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:14.153 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:14.153 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:14.153 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:14.153 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:14.153 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:14.153 09:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:14.153 09:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:14.153 09:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:14.153 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:14.153 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:14.411
00:18:14.411 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:14.411 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:14.411 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:14.669 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:14.669 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:14.669 09:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:14.669 09:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:14.669 09:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:14.669 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:14.669 {
00:18:14.669 "cntlid": 65,
00:18:14.669 "qid": 0,
00:18:14.669 "state": "enabled",
00:18:14.669 "thread": "nvmf_tgt_poll_group_000",
00:18:14.669 "listen_address": {
00:18:14.669 "trtype": "TCP",
00:18:14.669 "adrfam": "IPv4",
00:18:14.669 "traddr": "10.0.0.2",
00:18:14.669 "trsvcid": "4420"
00:18:14.669 },
00:18:14.669 "peer_address": {
00:18:14.669 "trtype": "TCP",
00:18:14.669 "adrfam": "IPv4",
00:18:14.669 "traddr": "10.0.0.1",
00:18:14.669 "trsvcid": "40324"
00:18:14.669 },
00:18:14.669 "auth": {
00:18:14.669 "state": "completed",
00:18:14.669 "digest": "sha384",
00:18:14.669 "dhgroup": "ffdhe3072"
00:18:14.669 }
00:18:14.669 }
00:18:14.669 ]'
00:18:14.669 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:14.669 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:14.669 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:14.669 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:14.669 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:14.928 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:14.928 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:14.928 09:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:15.187 09:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=:
00:18:16.124 09:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:16.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:16.124 09:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:16.124 09:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:16.124 09:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:16.124 09:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:16.124 09:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:16.124 09:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:16.124 09:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:16.124 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1
00:18:16.124 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:16.124 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:16.124 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:16.124 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:16.124 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:16.124 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:16.124 09:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:16.124 09:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:16.124 09:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:16.124 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:16.124 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:16.694
00:18:16.694 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:16.694 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:16.694 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:16.694 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:16.694 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:16.694 09:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:16.694 09:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:16.694 09:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:16.694 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:16.694 {
00:18:16.694 "cntlid": 67,
00:18:16.694 "qid": 0,
00:18:16.694 "state": "enabled",
00:18:16.694 "thread": "nvmf_tgt_poll_group_000",
00:18:16.694 "listen_address": {
00:18:16.694 "trtype": "TCP",
00:18:16.694 "adrfam": "IPv4",
00:18:16.694 "traddr": "10.0.0.2",
00:18:16.694 "trsvcid": "4420"
00:18:16.694 },
00:18:16.694 "peer_address": {
00:18:16.694 "trtype": "TCP",
00:18:16.694 "adrfam": "IPv4",
00:18:16.694 "traddr": "10.0.0.1",
00:18:16.694 "trsvcid": "49160"
00:18:16.694 },
00:18:16.694 "auth": {
00:18:16.694 "state": "completed",
00:18:16.694 "digest": "sha384",
00:18:16.694 "dhgroup": "ffdhe3072"
00:18:16.694 }
00:18:16.694 }
00:18:16.694 ]'
00:18:16.951 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:16.951 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:16.951 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:16.951 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:16.951 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:16.951 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:16.951 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:16.951 09:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:17.209 09:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==:
00:18:18.147 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:18.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:18.147 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:18.147 09:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:18.147 09:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.147 09:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:18.147 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:18.147 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:18.147 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:18.406 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2
00:18:18.406 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:18.406 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:18.406 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:18.406 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:18.406 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:18.406 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:18.406 09:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:18.406 09:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.406 09:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
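(The verification block that repeats after every attach in this trace boils down to the following; a sketch with $rpc as in the sketch above, assuming, as everywhere in this run, exactly one qpair on the subsystem:)

  # confirm the host-side controller came up, then check what the target negotiated
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha384 ]]       # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # negotiated FFDHE group
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]     # DH-HMAC-CHAP finished
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0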
00:18:18.406 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:18.406 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:18.664
00:18:18.664 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:18.664 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:18.664 09:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:18.922 09:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:18.922 09:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:18.922 09:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:18.922 09:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.922 09:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:18.922 09:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:18.922 {
00:18:18.922 "cntlid": 69,
00:18:18.922 "qid": 0,
00:18:18.922 "state": "enabled",
00:18:18.922 "thread": "nvmf_tgt_poll_group_000",
00:18:18.922 "listen_address": {
00:18:18.922 "trtype": "TCP",
00:18:18.922 "adrfam": "IPv4",
00:18:18.922 "traddr": "10.0.0.2",
00:18:18.922 "trsvcid": "4420"
00:18:18.922 },
00:18:18.922 "peer_address": {
00:18:18.922 "trtype": "TCP",
00:18:18.922 "adrfam": "IPv4",
00:18:18.922 "traddr": "10.0.0.1",
00:18:18.922 "trsvcid": "49192"
00:18:18.922 },
00:18:18.922 "auth": {
00:18:18.922 "state": "completed",
00:18:18.922 "digest": "sha384",
00:18:18.922 "dhgroup": "ffdhe3072"
00:18:18.922 }
00:18:18.922 }
00:18:18.922 ]'
00:18:19.180 09:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:19.180 09:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:19.180 09:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:19.180 09:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:19.180 09:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:19.180 09:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:19.180 09:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:19.180 09:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:19.438 09:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD:
00:18:20.375 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:20.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:20.375 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:20.375 09:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:20.375 09:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.375 09:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:20.375 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:20.375 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:20.375 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:20.633 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3
00:18:20.633 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:20.633 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:20.633 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:20.633 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:20.633 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:20.633 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:18:20.633 09:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:20.633 09:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.633 09:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:20.633 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:20.633 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:20.891
00:18:20.891 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:20.891 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:20.891 09:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:21.149 09:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:21.149 09:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:21.149 09:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:21.149 09:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:21.149 09:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:21.149 09:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:21.149 {
00:18:21.150 "cntlid": 71,
00:18:21.150 "qid": 0,
00:18:21.150 "state": "enabled",
00:18:21.150 "thread": "nvmf_tgt_poll_group_000",
00:18:21.150 "listen_address": {
00:18:21.150 "trtype": "TCP",
00:18:21.150 "adrfam": "IPv4",
00:18:21.150 "traddr": "10.0.0.2",
00:18:21.150 "trsvcid": "4420"
00:18:21.150 },
00:18:21.150 "peer_address": {
00:18:21.150 "trtype": "TCP",
00:18:21.150 "adrfam": "IPv4",
00:18:21.150 "traddr": "10.0.0.1",
00:18:21.150 "trsvcid": "49230"
00:18:21.150 },
00:18:21.150 "auth": {
00:18:21.150 "state": "completed",
00:18:21.150 "digest": "sha384",
00:18:21.150 "dhgroup": "ffdhe3072"
00:18:21.150 }
00:18:21.150 }
00:18:21.150 ]'
00:18:21.150 09:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:21.150 09:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:21.150 09:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:21.150 09:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:21.150 09:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:21.409 09:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:21.409 09:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:21.409 09:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:21.667 09:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=:
00:18:22.601 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:22.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:22.601 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:22.601 09:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:22.601 09:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.601 09:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
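(This closes the ffdhe3072 pass; the same key matrix now replays with ffdhe4096, and later ffdhe6144. The outer structure being traced at target/auth.sh@92-96 is, in sketch form, inferred from the xtrace lines themselves:)

  for dhgroup in "${dhgroups[@]}"; do   # here: ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
      for keyid in "${!keys[@]}"; do    # key0 through key3
          # host restricts itself to one digest/dhgroup pair, then one full cycle runs
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done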
00:18:22.601 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:22.601 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:22.601 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:22.601 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:22.859 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0
00:18:22.859 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:22.859 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:22.859 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:22.859 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:22.859 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:22.859 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:22.859 09:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:22.859 09:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.859 09:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:22.859 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:22.859 09:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:23.117
00:18:23.117 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:23.117 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:23.117 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:23.374 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:23.374 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:23.374 09:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:23.374 09:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:23.374 09:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:23.374 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:23.374 {
00:18:23.374 "cntlid": 73,
00:18:23.374 "qid": 0,
00:18:23.374 "state": "enabled",
00:18:23.374 "thread": "nvmf_tgt_poll_group_000",
00:18:23.374 "listen_address": {
00:18:23.374 "trtype": "TCP",
00:18:23.374 "adrfam": "IPv4",
00:18:23.374 "traddr": "10.0.0.2",
00:18:23.374 "trsvcid": "4420"
00:18:23.374 },
00:18:23.374 "peer_address": {
00:18:23.374 "trtype": "TCP",
00:18:23.374 "adrfam": "IPv4",
00:18:23.374 "traddr": "10.0.0.1",
00:18:23.374 "trsvcid": "49272"
00:18:23.374 },
00:18:23.374 "auth": {
00:18:23.374 "state": "completed",
00:18:23.374 "digest": "sha384",
00:18:23.374 "dhgroup": "ffdhe4096"
00:18:23.374 }
00:18:23.374 }
00:18:23.374 ]'
00:18:23.374 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:23.374 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:23.374 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:23.632 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:23.632 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:23.632 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:23.632 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:23.632 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:23.890 09:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=:
00:18:24.828 09:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:24.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:24.828 09:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:24.828 09:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:24.828 09:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:24.828 09:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:24.828 09:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:24.828 09:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:24.828 09:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:25.086 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1
00:18:25.086 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:25.086 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:25.086 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:25.086 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:25.086 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:25.086 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:25.086 09:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:25.086 09:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:25.086 09:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:25.086 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:25.086 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:25.345
00:18:25.605 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:25.605 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:25.605 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:25.864 {
00:18:25.864 "cntlid": 75,
00:18:25.864 "qid": 0,
00:18:25.864 "state": "enabled",
00:18:25.864 "thread": "nvmf_tgt_poll_group_000",
00:18:25.864 "listen_address": {
00:18:25.864 "trtype": "TCP",
00:18:25.864 "adrfam": "IPv4",
00:18:25.864 "traddr": "10.0.0.2",
00:18:25.864 "trsvcid": "4420"
00:18:25.864 },
00:18:25.864 "peer_address": {
00:18:25.864 "trtype": "TCP",
00:18:25.864 "adrfam": "IPv4",
00:18:25.864 "traddr": "10.0.0.1",
00:18:25.864 "trsvcid": "58372"
00:18:25.864 },
00:18:25.864 "auth": {
00:18:25.864 "state": "completed",
00:18:25.864 "digest": "sha384",
00:18:25.864 "dhgroup": "ffdhe4096"
00:18:25.864 }
00:18:25.864 }
00:18:25.864 ]'
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:25.864 09:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:26.123 09:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==:
00:18:27.059 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:27.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:27.059 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:27.059 09:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:27.059 09:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.059 09:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:27.059 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:27.059 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:27.059 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:27.317 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2
00:18:27.317 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:27.317 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:27.317 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:27.317 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:27.317 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:27.317 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:27.317 09:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:27.317 09:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.317 09:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:27.317 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:27.317 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:27.884
00:18:27.884 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:27.884 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:27.884 09:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:27.884 09:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:27.884 09:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:27.884 09:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:27.884 09:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.884 09:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:27.884 09:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:27.884 {
00:18:27.884 "cntlid": 77,
00:18:27.884 "qid": 0,
00:18:27.884 "state": "enabled",
00:18:27.884 "thread": "nvmf_tgt_poll_group_000",
00:18:27.884 "listen_address": {
00:18:27.884 "trtype": "TCP",
00:18:27.884 "adrfam": "IPv4",
00:18:27.884 "traddr": "10.0.0.2",
00:18:27.884 "trsvcid": "4420"
00:18:27.884 },
00:18:27.884 "peer_address": {
00:18:27.884 "trtype": "TCP",
00:18:27.884 "adrfam": "IPv4",
00:18:27.884 "traddr": "10.0.0.1",
00:18:27.884 "trsvcid": "58406"
00:18:27.884 },
00:18:27.884 "auth": {
00:18:27.884 "state": "completed",
00:18:27.884 "digest": "sha384",
00:18:27.884 "dhgroup": "ffdhe4096"
00:18:27.884 }
00:18:27.884 }
00:18:27.884 ]'
00:18:28.142 09:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:28.142 09:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:28.142 09:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:28.142 09:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:28.142 09:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:28.142 09:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:28.142 09:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:28.142 09:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:28.399 09:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD:
00:18:29.337 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:29.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:29.337 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
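(Each iteration ends by re-authenticating once more with the kernel initiator via nvme-cli, outside SPDK's own host stack. The pattern, with this job's throwaway DHHC-1 test secrets abbreviated for readability:)

  # host and controller secrets are the same DH-CHAP key pair the target was configured with
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0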
00:18:29.337 09:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:29.337 09:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:29.337 09:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:29.337 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:29.337 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:29.337 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:29.595 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3
00:18:29.595 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:29.595 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:29.595 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:29.595 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:29.595 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:29.595 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:18:29.595 09:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:29.595 09:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:29.595 09:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:29.595 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:29.595 09:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:30.161
00:18:30.162 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:30.162 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:30.162 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:30.162 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:30.162 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:30.162 09:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:30.162 09:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:30.162 09:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:30.162 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:30.162 {
00:18:30.162 "cntlid": 79,
00:18:30.162 "qid": 0,
00:18:30.162 "state": "enabled",
00:18:30.162 "thread": "nvmf_tgt_poll_group_000",
00:18:30.162 "listen_address": {
00:18:30.162 "trtype": "TCP",
00:18:30.162 "adrfam": "IPv4",
00:18:30.162 "traddr": "10.0.0.2",
00:18:30.162 "trsvcid": "4420"
00:18:30.162 },
00:18:30.162 "peer_address": {
00:18:30.162 "trtype": "TCP",
00:18:30.162 "adrfam": "IPv4",
00:18:30.162 "traddr": "10.0.0.1",
00:18:30.162 "trsvcid": "58438"
00:18:30.162 },
00:18:30.162 "auth": {
00:18:30.162 "state": "completed",
00:18:30.162 "digest": "sha384",
00:18:30.162 "dhgroup": "ffdhe4096"
00:18:30.162 }
00:18:30.162 }
00:18:30.162 ]'
00:18:30.420 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:30.420 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:30.421 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:30.421 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:30.421 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:30.421 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:30.421 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:30.421 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:30.678 09:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=:
00:18:31.614 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:31.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:31.614 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:31.614 09:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:31.614 09:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:31.614 09:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:31.614 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:31.614 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:31.614 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:31.614 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:31.871 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0
00:18:31.872 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:31.872 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:31.872 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:31.872 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:31.872 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:31.872 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:31.872 09:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:31.872 09:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:31.872 09:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:31.872 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:31.872 09:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:32.437
00:18:32.437 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:32.437 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:32.437 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:32.695 {
00:18:32.695 "cntlid": 81,
00:18:32.695 "qid": 0,
00:18:32.695 "state": "enabled",
00:18:32.695 "thread": "nvmf_tgt_poll_group_000",
00:18:32.695 "listen_address": {
00:18:32.695 "trtype": "TCP",
00:18:32.695 "adrfam": "IPv4",
00:18:32.695 "traddr": "10.0.0.2",
00:18:32.695 "trsvcid": "4420"
00:18:32.695 },
00:18:32.695 "peer_address": {
00:18:32.695 "trtype": "TCP",
00:18:32.695 "adrfam": "IPv4",
00:18:32.695 "traddr": "10.0.0.1",
00:18:32.695 "trsvcid": "58468"
00:18:32.695 },
00:18:32.695 "auth": {
00:18:32.695 "state": "completed",
00:18:32.695 "digest": "sha384",
00:18:32.695 "dhgroup": "ffdhe6144"
00:18:32.695 }
00:18:32.695 }
00:18:32.695 ]'
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:32.695 09:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:32.951 09:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=:
00:18:33.885 09:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:33.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:33.885 09:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:33.885 09:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:33.885 09:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:33.885 09:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:33.885 09:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:33.885 09:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:33.885 09:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:34.142 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1
00:18:34.142 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:34.142 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:34.142 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:34.142 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:34.142 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:34.142 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:34.142 09:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:34.142 09:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.142 09:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:34.142 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:34.142 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:34.709
00:18:34.709 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:34.709 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:34.709 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:34.965 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:34.965 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:34.965 09:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:34.965 09:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.965 09:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:34.965 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:34.965 {
00:18:34.965 "cntlid": 83,
00:18:34.965 "qid": 0,
00:18:34.965 "state": "enabled",
00:18:34.965 "thread": "nvmf_tgt_poll_group_000",
00:18:34.965 "listen_address": {
00:18:34.965 "trtype": "TCP",
00:18:34.965 "adrfam": "IPv4",
00:18:34.965 "traddr": "10.0.0.2",
00:18:34.965 "trsvcid": "4420"
00:18:34.965 },
00:18:34.965 "peer_address": {
00:18:34.965 "trtype": "TCP",
00:18:34.965 "adrfam": "IPv4",
00:18:34.965 "traddr": "10.0.0.1",
00:18:34.965 "trsvcid": "58488"
00:18:34.965 },
00:18:34.965 "auth": {
00:18:34.965 "state": "completed",
00:18:34.965 "digest": "sha384",
00:18:34.965 "dhgroup": "ffdhe6144"
00:18:34.965 }
00:18:34.965 }
00:18:34.965 ]'
00:18:34.965 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:34.965 09:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:34.965 09:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:34.965 09:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:34.965 09:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:34.965 09:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:34.965 09:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:34.965 09:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:35.224 09:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==:
00:18:36.158 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:36.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:36.158 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:36.158 09:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:36.158 09:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.158 09:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:36.158 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:36.158 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:36.158 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:36.416 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2
00:18:36.416 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:36.416 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:36.416 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:36.416 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:36.416 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:36.416 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:36.416 09:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:36.416 09:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.416 09:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:36.416 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:36.416 09:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:36.983
00:18:36.983 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:36.983 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:36.983 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:37.269 09:26:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.269 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.269 09:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.269 09:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.269 09:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.269 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.269 { 00:18:37.269 "cntlid": 85, 00:18:37.269 "qid": 0, 00:18:37.269 "state": "enabled", 00:18:37.269 "thread": "nvmf_tgt_poll_group_000", 00:18:37.269 "listen_address": { 00:18:37.269 "trtype": "TCP", 00:18:37.269 "adrfam": "IPv4", 00:18:37.269 "traddr": "10.0.0.2", 00:18:37.269 "trsvcid": "4420" 00:18:37.269 }, 00:18:37.269 "peer_address": { 00:18:37.269 "trtype": "TCP", 00:18:37.269 "adrfam": "IPv4", 00:18:37.269 "traddr": "10.0.0.1", 00:18:37.269 "trsvcid": "54870" 00:18:37.269 }, 00:18:37.269 "auth": { 00:18:37.269 "state": "completed", 00:18:37.269 "digest": "sha384", 00:18:37.269 "dhgroup": "ffdhe6144" 00:18:37.269 } 00:18:37.269 } 00:18:37.269 ]' 00:18:37.269 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.269 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.269 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.269 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.269 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.527 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.528 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.528 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.786 09:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:18:38.722 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.722 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:38.722 09:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.722 09:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.722 09:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.722 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.722 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
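
[Editor's note] The cycle traced above repeats once per key slot: register the host on the subsystem, attach a host-side controller with the same DH-HMAC-CHAP key, verify the negotiated qpair, then detach. The sketch below condenses the connect_authenticate helper as it can be reconstructed from the xtrace markers (target/auth.sh@34-@49); it is a reconstruction, not a copy of the script in the SPDK tree, and it assumes the harness provides rpc_cmd, hostrpc, the ckeys array, and a hostnqn variable holding the nqn.2014-08.org.nvmexpress:uuid:... host NQN seen in the log.

    # Reconstructed from the xtrace above; harness helpers (rpc_cmd, hostrpc,
    # ckeys, hostnqn) are assumed to be defined by the surrounding test setup.
    connect_authenticate() {
        local digest dhgroup key ckey qpairs
        digest="$1" dhgroup="$2" key="key$3"
        # Only pass a controller key if one was registered for this slot.
        ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

        # Target side: allow this host, bound to the keys under test.
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "$key" "${ckey[@]}"
        # Host side: attach a controller using the same credentials.
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key "$key" "${ckey[@]}"

        # Verify the controller came up and the qpair negotiated what we asked.
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]

        hostrpc bdev_nvme_detach_controller nvme0
    }

The three jq assertions correspond exactly to the @46/@47/@48 checks in the trace; note that for the "null" dhgroup cycles later in the log, jq prints the literal string null, which is why the trace compares against \n\u\l\l.
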
00:18:38.722 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:38.980 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:38.980 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.980 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.980 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:38.980 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:38.980 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.980 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:38.980 09:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.980 09:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.980 09:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.980 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.980 09:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.543 00:18:39.543 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.543 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.543 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.543 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.543 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.543 09:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.543 09:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.799 09:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.799 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.799 { 00:18:39.799 "cntlid": 87, 00:18:39.799 "qid": 0, 00:18:39.799 "state": "enabled", 00:18:39.799 "thread": "nvmf_tgt_poll_group_000", 00:18:39.799 "listen_address": { 00:18:39.799 "trtype": "TCP", 00:18:39.799 "adrfam": "IPv4", 00:18:39.799 "traddr": "10.0.0.2", 00:18:39.799 "trsvcid": "4420" 00:18:39.799 }, 00:18:39.799 "peer_address": { 00:18:39.799 "trtype": "TCP", 00:18:39.799 "adrfam": "IPv4", 00:18:39.799 "traddr": "10.0.0.1", 00:18:39.799 "trsvcid": "54904" 00:18:39.799 }, 00:18:39.799 "auth": { 00:18:39.799 "state": "completed", 
00:18:39.799 "digest": "sha384", 00:18:39.799 "dhgroup": "ffdhe6144" 00:18:39.799 } 00:18:39.799 } 00:18:39.799 ]' 00:18:39.799 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.799 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.799 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.799 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:39.799 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.799 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.799 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.799 09:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.057 09:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:18:40.988 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.988 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:40.988 09:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.988 09:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.988 09:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.988 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.988 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.988 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:40.988 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:41.244 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:41.244 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.244 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:41.244 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:41.244 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:41.245 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.245 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:41.245 09:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.245 09:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.245 09:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.245 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.245 09:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.176 00:18:42.176 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.176 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.176 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.433 { 00:18:42.433 "cntlid": 89, 00:18:42.433 "qid": 0, 00:18:42.433 "state": "enabled", 00:18:42.433 "thread": "nvmf_tgt_poll_group_000", 00:18:42.433 "listen_address": { 00:18:42.433 "trtype": "TCP", 00:18:42.433 "adrfam": "IPv4", 00:18:42.433 "traddr": "10.0.0.2", 00:18:42.433 "trsvcid": "4420" 00:18:42.433 }, 00:18:42.433 "peer_address": { 00:18:42.433 "trtype": "TCP", 00:18:42.433 "adrfam": "IPv4", 00:18:42.433 "traddr": "10.0.0.1", 00:18:42.433 "trsvcid": "54940" 00:18:42.433 }, 00:18:42.433 "auth": { 00:18:42.433 "state": "completed", 00:18:42.433 "digest": "sha384", 00:18:42.433 "dhgroup": "ffdhe8192" 00:18:42.433 } 00:18:42.433 } 00:18:42.433 ]' 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.433 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.690 09:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:18:43.623 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.623 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:43.623 09:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.623 09:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.623 09:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.623 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.623 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:43.623 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:43.881 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:43.881 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.881 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.881 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:43.881 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:43.881 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.881 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.881 09:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.881 09:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.881 09:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.881 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.881 09:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
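
[Editor's note] Two host-side pieces recur throughout this trace. First, every @31 line is the expansion of a thin hostrpc wrapper that points rpc.py at the host-side SPDK application's socket rather than the target's; a minimal sketch, assuming rootdir names the SPDK checkout:

    # Route RPCs to the host-side SPDK app listening on /var/tmp/host.sock.
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

Second, after each bdev_nvme pass, the @52/@55 lines exercise the same handshake through the kernel initiator, passing the DHHC-1 secrets straight to nvme-cli. A sketch with placeholder secrets (the real DHHC-1:NN:...: values appear verbatim in the log; hostid stands in for the 29f67375-... UUID):

    # Kernel-initiator counterpart of the handshake; secrets are placeholders.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
        --dhchap-secret "DHHC-1:00:<host-key>:" \
        --dhchap-ctrl-secret "DHHC-1:03:<ctrl-key>:"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The "disconnected 1 controller(s)" lines in the log are nvme-cli's confirmation that this second path also authenticated and connected successfully.
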
00:18:44.812 00:18:44.812 09:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.812 09:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.812 09:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.812 09:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.812 09:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.812 09:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.812 09:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.812 09:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.812 09:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.812 { 00:18:44.812 "cntlid": 91, 00:18:44.812 "qid": 0, 00:18:44.812 "state": "enabled", 00:18:44.812 "thread": "nvmf_tgt_poll_group_000", 00:18:44.812 "listen_address": { 00:18:44.812 "trtype": "TCP", 00:18:44.812 "adrfam": "IPv4", 00:18:44.812 "traddr": "10.0.0.2", 00:18:44.812 "trsvcid": "4420" 00:18:44.812 }, 00:18:44.812 "peer_address": { 00:18:44.812 "trtype": "TCP", 00:18:44.812 "adrfam": "IPv4", 00:18:44.812 "traddr": "10.0.0.1", 00:18:44.812 "trsvcid": "54968" 00:18:44.812 }, 00:18:44.812 "auth": { 00:18:44.812 "state": "completed", 00:18:44.812 "digest": "sha384", 00:18:44.812 "dhgroup": "ffdhe8192" 00:18:44.812 } 00:18:44.812 } 00:18:44.812 ]' 00:18:44.812 09:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.069 09:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.069 09:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.069 09:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.069 09:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.069 09:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.069 09:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.069 09:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.327 09:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:18:46.258 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.258 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:46.258 09:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:46.258 09:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.258 09:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.258 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.258 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:46.258 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:46.515 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:46.516 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.516 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.516 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:46.516 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.516 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.516 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.516 09:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.516 09:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.516 09:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.516 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.516 09:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.448 00:18:47.448 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.448 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.448 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.448 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.448 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.448 09:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.448 09:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.448 09:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.448 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.448 { 
00:18:47.448 "cntlid": 93, 00:18:47.448 "qid": 0, 00:18:47.448 "state": "enabled", 00:18:47.448 "thread": "nvmf_tgt_poll_group_000", 00:18:47.448 "listen_address": { 00:18:47.448 "trtype": "TCP", 00:18:47.448 "adrfam": "IPv4", 00:18:47.448 "traddr": "10.0.0.2", 00:18:47.448 "trsvcid": "4420" 00:18:47.448 }, 00:18:47.448 "peer_address": { 00:18:47.448 "trtype": "TCP", 00:18:47.448 "adrfam": "IPv4", 00:18:47.448 "traddr": "10.0.0.1", 00:18:47.448 "trsvcid": "39644" 00:18:47.448 }, 00:18:47.448 "auth": { 00:18:47.448 "state": "completed", 00:18:47.448 "digest": "sha384", 00:18:47.448 "dhgroup": "ffdhe8192" 00:18:47.448 } 00:18:47.448 } 00:18:47.448 ]' 00:18:47.448 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.448 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.448 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.706 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.706 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.706 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.706 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.706 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.963 09:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:18:48.896 09:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.896 09:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:48.896 09:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.896 09:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.896 09:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.896 09:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.896 09:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:48.896 09:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:49.155 09:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:49.155 09:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.155 09:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.155 09:27:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:49.155 09:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:49.155 09:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.155 09:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:49.155 09:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.155 09:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.155 09:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.155 09:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.155 09:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.088 00:18:50.088 09:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.088 09:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.088 09:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.088 09:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.088 09:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.088 09:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.088 09:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.088 09:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.088 09:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.088 { 00:18:50.088 "cntlid": 95, 00:18:50.088 "qid": 0, 00:18:50.088 "state": "enabled", 00:18:50.088 "thread": "nvmf_tgt_poll_group_000", 00:18:50.088 "listen_address": { 00:18:50.088 "trtype": "TCP", 00:18:50.088 "adrfam": "IPv4", 00:18:50.088 "traddr": "10.0.0.2", 00:18:50.088 "trsvcid": "4420" 00:18:50.088 }, 00:18:50.088 "peer_address": { 00:18:50.088 "trtype": "TCP", 00:18:50.088 "adrfam": "IPv4", 00:18:50.088 "traddr": "10.0.0.1", 00:18:50.088 "trsvcid": "39676" 00:18:50.088 }, 00:18:50.088 "auth": { 00:18:50.088 "state": "completed", 00:18:50.088 "digest": "sha384", 00:18:50.088 "dhgroup": "ffdhe8192" 00:18:50.088 } 00:18:50.088 } 00:18:50.088 ]' 00:18:50.088 09:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.088 09:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.088 09:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.346 09:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:50.346 09:27:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.346 09:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.346 09:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.346 09:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.604 09:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:18:51.536 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.536 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:51.536 09:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.536 09:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.536 09:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.536 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:51.536 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.536 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.536 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:51.536 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:51.793 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:51.793 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.793 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:51.793 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:51.793 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:51.793 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.793 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.793 09:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.793 09:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.793 09:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.793 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.793 09:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.050 00:18:52.050 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.050 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.050 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.307 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.307 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.307 09:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.307 09:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.307 09:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.307 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.307 { 00:18:52.307 "cntlid": 97, 00:18:52.307 "qid": 0, 00:18:52.307 "state": "enabled", 00:18:52.307 "thread": "nvmf_tgt_poll_group_000", 00:18:52.307 "listen_address": { 00:18:52.307 "trtype": "TCP", 00:18:52.307 "adrfam": "IPv4", 00:18:52.307 "traddr": "10.0.0.2", 00:18:52.307 "trsvcid": "4420" 00:18:52.307 }, 00:18:52.307 "peer_address": { 00:18:52.307 "trtype": "TCP", 00:18:52.307 "adrfam": "IPv4", 00:18:52.307 "traddr": "10.0.0.1", 00:18:52.307 "trsvcid": "39700" 00:18:52.307 }, 00:18:52.307 "auth": { 00:18:52.307 "state": "completed", 00:18:52.307 "digest": "sha512", 00:18:52.307 "dhgroup": "null" 00:18:52.307 } 00:18:52.307 } 00:18:52.307 ]' 00:18:52.307 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.307 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.307 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.307 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:52.307 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.307 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.308 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.308 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.870 09:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret 
DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.803 09:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.060 00:18:54.060 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.060 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.060 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.317 09:27:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.317 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.317 09:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.317 09:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.317 09:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.317 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.317 { 00:18:54.317 "cntlid": 99, 00:18:54.317 "qid": 0, 00:18:54.317 "state": "enabled", 00:18:54.317 "thread": "nvmf_tgt_poll_group_000", 00:18:54.317 "listen_address": { 00:18:54.317 "trtype": "TCP", 00:18:54.317 "adrfam": "IPv4", 00:18:54.317 "traddr": "10.0.0.2", 00:18:54.317 "trsvcid": "4420" 00:18:54.317 }, 00:18:54.317 "peer_address": { 00:18:54.317 "trtype": "TCP", 00:18:54.317 "adrfam": "IPv4", 00:18:54.317 "traddr": "10.0.0.1", 00:18:54.317 "trsvcid": "39744" 00:18:54.317 }, 00:18:54.317 "auth": { 00:18:54.317 "state": "completed", 00:18:54.317 "digest": "sha512", 00:18:54.317 "dhgroup": "null" 00:18:54.317 } 00:18:54.317 } 00:18:54.317 ]' 00:18:54.317 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.574 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.574 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.574 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:54.574 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.574 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.574 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.574 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.839 09:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:18:55.768 09:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.768 09:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:55.768 09:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.768 09:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.768 09:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.768 09:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.768 09:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:55.768 09:27:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.025 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:56.025 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.025 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.025 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:56.025 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.025 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.025 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.025 09:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.025 09:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.025 09:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.025 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.025 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.282 00:18:56.282 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.282 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.282 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.539 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.540 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.540 09:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.540 09:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.540 09:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.540 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.540 { 00:18:56.540 "cntlid": 101, 00:18:56.540 "qid": 0, 00:18:56.540 "state": "enabled", 00:18:56.540 "thread": "nvmf_tgt_poll_group_000", 00:18:56.540 "listen_address": { 00:18:56.540 "trtype": "TCP", 00:18:56.540 "adrfam": "IPv4", 00:18:56.540 "traddr": "10.0.0.2", 00:18:56.540 "trsvcid": "4420" 00:18:56.540 }, 00:18:56.540 "peer_address": { 00:18:56.540 "trtype": "TCP", 00:18:56.540 "adrfam": "IPv4", 00:18:56.540 "traddr": "10.0.0.1", 00:18:56.540 "trsvcid": "50392" 00:18:56.540 }, 00:18:56.540 "auth": 
{ 00:18:56.540 "state": "completed", 00:18:56.540 "digest": "sha512", 00:18:56.540 "dhgroup": "null" 00:18:56.540 } 00:18:56.540 } 00:18:56.540 ]' 00:18:56.540 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.540 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.540 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.540 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:56.540 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.798 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.798 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.798 09:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.055 09:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:18:57.988 09:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.988 09:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:57.988 09:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.988 09:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.988 09:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.988 09:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.988 09:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:57.988 09:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:57.988 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:57.988 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.988 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:57.988 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:57.988 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:57.988 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.988 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:57.988 09:27:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.988 09:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.988 09:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.988 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.988 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.554 00:18:58.554 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.554 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.554 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.554 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.554 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.554 09:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.554 09:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.554 09:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.554 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.554 { 00:18:58.554 "cntlid": 103, 00:18:58.554 "qid": 0, 00:18:58.554 "state": "enabled", 00:18:58.554 "thread": "nvmf_tgt_poll_group_000", 00:18:58.554 "listen_address": { 00:18:58.554 "trtype": "TCP", 00:18:58.554 "adrfam": "IPv4", 00:18:58.554 "traddr": "10.0.0.2", 00:18:58.554 "trsvcid": "4420" 00:18:58.554 }, 00:18:58.554 "peer_address": { 00:18:58.554 "trtype": "TCP", 00:18:58.554 "adrfam": "IPv4", 00:18:58.554 "traddr": "10.0.0.1", 00:18:58.554 "trsvcid": "50418" 00:18:58.554 }, 00:18:58.554 "auth": { 00:18:58.554 "state": "completed", 00:18:58.554 "digest": "sha512", 00:18:58.554 "dhgroup": "null" 00:18:58.554 } 00:18:58.554 } 00:18:58.554 ]' 00:18:58.554 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.812 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.812 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.812 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:58.812 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.812 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.812 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.812 09:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.069 09:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:19:00.002 09:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.002 09:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:00.002 09:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.002 09:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.003 09:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.003 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.003 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.003 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.003 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.260 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:00.260 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.260 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.260 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:00.260 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.260 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.260 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.260 09:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.260 09:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.260 09:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.260 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.260 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.517 00:19:00.517 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.517 09:27:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.517 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.774 { 00:19:00.774 "cntlid": 105, 00:19:00.774 "qid": 0, 00:19:00.774 "state": "enabled", 00:19:00.774 "thread": "nvmf_tgt_poll_group_000", 00:19:00.774 "listen_address": { 00:19:00.774 "trtype": "TCP", 00:19:00.774 "adrfam": "IPv4", 00:19:00.774 "traddr": "10.0.0.2", 00:19:00.774 "trsvcid": "4420" 00:19:00.774 }, 00:19:00.774 "peer_address": { 00:19:00.774 "trtype": "TCP", 00:19:00.774 "adrfam": "IPv4", 00:19:00.774 "traddr": "10.0.0.1", 00:19:00.774 "trsvcid": "50444" 00:19:00.774 }, 00:19:00.774 "auth": { 00:19:00.774 "state": "completed", 00:19:00.774 "digest": "sha512", 00:19:00.774 "dhgroup": "ffdhe2048" 00:19:00.774 } 00:19:00.774 } 00:19:00.774 ]' 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.774 09:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.033 09:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:19:01.967 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.967 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:01.967 09:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.967 09:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
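The same connect_authenticate cycle repeats here for every digest/dhgroup/key combination: constrain the host with bdev_nvme_set_options, authorize the host on the subsystem with nvmf_subsystem_add_host, attach a controller to force the DH-HMAC-CHAP exchange, assert the negotiated parameters from nvmf_subsystem_get_qpairs, then tear down. A minimal standalone sketch of one iteration follows, assuming a target listening on 10.0.0.2:4420, a host RPC socket at /var/tmp/host.sock, and keyring entries key0/ckey0 registered earlier in the run; the RPC names and jq filters below appear verbatim in this log, while the variable layout is illustrative rather than the literal body of target/auth.sh.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration, reconstructed from this log.
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock            # host-side bdev_nvme RPC socket (hostrpc in the log)
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
digest=sha512                          # one of the --dhchap-digests under test
dhgroup=ffdhe2048                      # one of the --dhchap-dhgroups under test

# Host side: accept only this digest/dhgroup pair during negotiation.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side: authorize the host NQN with a key pair (key0 = host key, ckey0 = controller key;
# both are assumed to have been loaded into the keyring before this excerpt).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller from the host initiator; this is what drives the DH-HMAC-CHAP exchange.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm what was actually negotiated on the established queue pair.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

# Tear down so the next digest/dhgroup/key combination starts from a clean state.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each pass in the log additionally revalidates the same key material through nvme-cli (nvme connect ... --dhchap-secret DHHC-1:xx:...: --dhchap-ctrl-secret DHHC-1:xx:...:) followed by nvme disconnect, before nvmf_subsystem_remove_host resets the subsystem for the next combination.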
00:19:01.967 09:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.967 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.967 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:01.967 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:02.225 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:02.225 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.225 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.225 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:02.225 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:02.225 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.225 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.225 09:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.225 09:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.225 09:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.225 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.225 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.483 00:19:02.483 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.483 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.483 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.740 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.740 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.740 09:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.740 09:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.740 09:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.740 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.740 { 00:19:02.740 "cntlid": 107, 00:19:02.740 "qid": 0, 00:19:02.740 "state": "enabled", 00:19:02.740 "thread": 
"nvmf_tgt_poll_group_000", 00:19:02.740 "listen_address": { 00:19:02.740 "trtype": "TCP", 00:19:02.740 "adrfam": "IPv4", 00:19:02.740 "traddr": "10.0.0.2", 00:19:02.740 "trsvcid": "4420" 00:19:02.740 }, 00:19:02.740 "peer_address": { 00:19:02.740 "trtype": "TCP", 00:19:02.740 "adrfam": "IPv4", 00:19:02.740 "traddr": "10.0.0.1", 00:19:02.740 "trsvcid": "50480" 00:19:02.740 }, 00:19:02.741 "auth": { 00:19:02.741 "state": "completed", 00:19:02.741 "digest": "sha512", 00:19:02.741 "dhgroup": "ffdhe2048" 00:19:02.741 } 00:19:02.741 } 00:19:02.741 ]' 00:19:02.741 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.998 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.998 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.998 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.998 09:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.998 09:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.998 09:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.998 09:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.256 09:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:19:04.189 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.189 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:04.189 09:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.189 09:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.189 09:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.189 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.189 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.189 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.447 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:04.447 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.447 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.447 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:04.447 09:27:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.447 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.447 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.447 09:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.447 09:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.447 09:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.447 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.447 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.705 00:19:04.705 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.706 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.706 09:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.964 { 00:19:04.964 "cntlid": 109, 00:19:04.964 "qid": 0, 00:19:04.964 "state": "enabled", 00:19:04.964 "thread": "nvmf_tgt_poll_group_000", 00:19:04.964 "listen_address": { 00:19:04.964 "trtype": "TCP", 00:19:04.964 "adrfam": "IPv4", 00:19:04.964 "traddr": "10.0.0.2", 00:19:04.964 "trsvcid": "4420" 00:19:04.964 }, 00:19:04.964 "peer_address": { 00:19:04.964 "trtype": "TCP", 00:19:04.964 "adrfam": "IPv4", 00:19:04.964 "traddr": "10.0.0.1", 00:19:04.964 "trsvcid": "50498" 00:19:04.964 }, 00:19:04.964 "auth": { 00:19:04.964 "state": "completed", 00:19:04.964 "digest": "sha512", 00:19:04.964 "dhgroup": "ffdhe2048" 00:19:04.964 } 00:19:04.964 } 00:19:04.964 ]' 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.964 09:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.224 09:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:19:06.158 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.158 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:06.158 09:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.159 09:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.159 09:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.159 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.159 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:06.159 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:06.416 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:06.416 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.416 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.416 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.416 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.416 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.416 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:06.416 09:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.416 09:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.416 09:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.416 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.416 09:27:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.673 00:19:06.674 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.674 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.674 09:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.931 09:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.931 09:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.931 09:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.931 09:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.931 09:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.931 09:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.931 { 00:19:06.931 "cntlid": 111, 00:19:06.931 "qid": 0, 00:19:06.931 "state": "enabled", 00:19:06.931 "thread": "nvmf_tgt_poll_group_000", 00:19:06.931 "listen_address": { 00:19:06.931 "trtype": "TCP", 00:19:06.931 "adrfam": "IPv4", 00:19:06.931 "traddr": "10.0.0.2", 00:19:06.931 "trsvcid": "4420" 00:19:06.931 }, 00:19:06.931 "peer_address": { 00:19:06.931 "trtype": "TCP", 00:19:06.931 "adrfam": "IPv4", 00:19:06.931 "traddr": "10.0.0.1", 00:19:06.931 "trsvcid": "37596" 00:19:06.931 }, 00:19:06.931 "auth": { 00:19:06.931 "state": "completed", 00:19:06.931 "digest": "sha512", 00:19:06.931 "dhgroup": "ffdhe2048" 00:19:06.931 } 00:19:06.931 } 00:19:06.931 ]' 00:19:06.931 09:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.188 09:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.188 09:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.188 09:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:07.188 09:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.188 09:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.188 09:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.188 09:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.446 09:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:19:08.381 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.381 09:27:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:08.381 09:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.381 09:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.381 09:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.381 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.381 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.381 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.381 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.642 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:08.642 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.642 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.642 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:08.642 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:08.642 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.642 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.642 09:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.642 09:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.642 09:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.642 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.642 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.900 00:19:08.900 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.900 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.900 09:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.158 { 00:19:09.158 "cntlid": 113, 00:19:09.158 "qid": 0, 00:19:09.158 "state": "enabled", 00:19:09.158 "thread": "nvmf_tgt_poll_group_000", 00:19:09.158 "listen_address": { 00:19:09.158 "trtype": "TCP", 00:19:09.158 "adrfam": "IPv4", 00:19:09.158 "traddr": "10.0.0.2", 00:19:09.158 "trsvcid": "4420" 00:19:09.158 }, 00:19:09.158 "peer_address": { 00:19:09.158 "trtype": "TCP", 00:19:09.158 "adrfam": "IPv4", 00:19:09.158 "traddr": "10.0.0.1", 00:19:09.158 "trsvcid": "37622" 00:19:09.158 }, 00:19:09.158 "auth": { 00:19:09.158 "state": "completed", 00:19:09.158 "digest": "sha512", 00:19:09.158 "dhgroup": "ffdhe3072" 00:19:09.158 } 00:19:09.158 } 00:19:09.158 ]' 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.158 09:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.417 09:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:19:10.352 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.352 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:10.352 09:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.352 09:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.352 09:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.352 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.352 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:10.352 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:10.611 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:10.611 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.611 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.611 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:10.611 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:10.611 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.611 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.611 09:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.611 09:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.611 09:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.611 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.611 09:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.178 00:19:11.178 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.178 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.178 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.436 { 00:19:11.436 "cntlid": 115, 00:19:11.436 "qid": 0, 00:19:11.436 "state": "enabled", 00:19:11.436 "thread": "nvmf_tgt_poll_group_000", 00:19:11.436 "listen_address": { 00:19:11.436 "trtype": "TCP", 00:19:11.436 "adrfam": "IPv4", 00:19:11.436 "traddr": "10.0.0.2", 00:19:11.436 "trsvcid": "4420" 00:19:11.436 }, 00:19:11.436 "peer_address": { 00:19:11.436 "trtype": "TCP", 00:19:11.436 "adrfam": "IPv4", 00:19:11.436 "traddr": "10.0.0.1", 00:19:11.436 "trsvcid": "37638" 00:19:11.436 }, 00:19:11.436 "auth": { 00:19:11.436 "state": "completed", 00:19:11.436 "digest": "sha512", 00:19:11.436 "dhgroup": "ffdhe3072" 00:19:11.436 } 00:19:11.436 } 
00:19:11.436 ]' 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.436 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.694 09:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:19:12.633 09:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.633 09:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:12.633 09:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.633 09:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.633 09:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.633 09:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.633 09:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:12.633 09:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:12.912 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:12.912 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.912 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.912 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:12.912 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:12.912 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.912 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.912 09:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.912 09:27:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.912 09:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.912 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.912 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.222 00:19:13.223 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.223 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.223 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.495 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.495 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.496 09:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.496 09:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.496 09:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.496 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.496 { 00:19:13.496 "cntlid": 117, 00:19:13.496 "qid": 0, 00:19:13.496 "state": "enabled", 00:19:13.496 "thread": "nvmf_tgt_poll_group_000", 00:19:13.496 "listen_address": { 00:19:13.496 "trtype": "TCP", 00:19:13.496 "adrfam": "IPv4", 00:19:13.496 "traddr": "10.0.0.2", 00:19:13.496 "trsvcid": "4420" 00:19:13.496 }, 00:19:13.496 "peer_address": { 00:19:13.496 "trtype": "TCP", 00:19:13.496 "adrfam": "IPv4", 00:19:13.496 "traddr": "10.0.0.1", 00:19:13.496 "trsvcid": "37664" 00:19:13.496 }, 00:19:13.496 "auth": { 00:19:13.496 "state": "completed", 00:19:13.496 "digest": "sha512", 00:19:13.496 "dhgroup": "ffdhe3072" 00:19:13.496 } 00:19:13.496 } 00:19:13.496 ]' 00:19:13.496 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.777 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.778 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.778 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.778 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.778 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.778 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.778 09:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.062 09:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:19:14.998 09:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.998 09:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:14.998 09:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.998 09:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.998 09:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.998 09:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.998 09:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.998 09:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.998 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:14.998 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.998 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.998 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:14.998 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.998 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.998 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:14.998 09:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.998 09:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.998 09:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.999 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.999 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.566 00:19:15.566 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.566 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.566 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.566 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.566 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.566 09:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.566 09:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.825 09:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.825 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.825 { 00:19:15.825 "cntlid": 119, 00:19:15.825 "qid": 0, 00:19:15.825 "state": "enabled", 00:19:15.825 "thread": "nvmf_tgt_poll_group_000", 00:19:15.825 "listen_address": { 00:19:15.825 "trtype": "TCP", 00:19:15.825 "adrfam": "IPv4", 00:19:15.825 "traddr": "10.0.0.2", 00:19:15.825 "trsvcid": "4420" 00:19:15.825 }, 00:19:15.825 "peer_address": { 00:19:15.825 "trtype": "TCP", 00:19:15.825 "adrfam": "IPv4", 00:19:15.825 "traddr": "10.0.0.1", 00:19:15.825 "trsvcid": "52436" 00:19:15.825 }, 00:19:15.825 "auth": { 00:19:15.825 "state": "completed", 00:19:15.825 "digest": "sha512", 00:19:15.825 "dhgroup": "ffdhe3072" 00:19:15.825 } 00:19:15.825 } 00:19:15.825 ]' 00:19:15.825 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.825 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.825 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.825 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.825 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.825 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.825 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.825 09:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.083 09:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:19:17.014 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.014 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:17.014 09:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.014 09:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.014 09:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.014 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.014 09:27:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.014 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:17.014 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:17.272 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:17.272 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.272 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:17.272 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:17.272 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:17.272 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.272 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.272 09:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.272 09:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.272 09:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.272 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.272 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.530 00:19:17.530 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.530 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.530 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.788 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.788 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.788 09:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.788 09:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.788 09:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.788 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.788 { 00:19:17.788 "cntlid": 121, 00:19:17.788 "qid": 0, 00:19:17.788 "state": "enabled", 00:19:17.788 "thread": "nvmf_tgt_poll_group_000", 00:19:17.788 "listen_address": { 00:19:17.788 "trtype": "TCP", 00:19:17.788 "adrfam": "IPv4", 
00:19:17.788 "traddr": "10.0.0.2", 00:19:17.788 "trsvcid": "4420" 00:19:17.788 }, 00:19:17.788 "peer_address": { 00:19:17.788 "trtype": "TCP", 00:19:17.788 "adrfam": "IPv4", 00:19:17.788 "traddr": "10.0.0.1", 00:19:17.788 "trsvcid": "52466" 00:19:17.788 }, 00:19:17.788 "auth": { 00:19:17.788 "state": "completed", 00:19:17.788 "digest": "sha512", 00:19:17.788 "dhgroup": "ffdhe4096" 00:19:17.788 } 00:19:17.788 } 00:19:17.788 ]' 00:19:17.788 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.788 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.788 09:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.046 09:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:18.046 09:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.046 09:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.046 09:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.046 09:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.305 09:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:19:19.241 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.241 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:19.241 09:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.241 09:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.241 09:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.241 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.241 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.241 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.499 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:19.499 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.499 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.499 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:19.499 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:19.499 09:27:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.499 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.499 09:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.499 09:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.499 09:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.499 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.499 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.757 00:19:19.757 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.757 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.757 09:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.014 09:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.014 09:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.014 09:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.014 09:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.014 09:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.014 09:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.014 { 00:19:20.014 "cntlid": 123, 00:19:20.014 "qid": 0, 00:19:20.014 "state": "enabled", 00:19:20.014 "thread": "nvmf_tgt_poll_group_000", 00:19:20.014 "listen_address": { 00:19:20.014 "trtype": "TCP", 00:19:20.014 "adrfam": "IPv4", 00:19:20.014 "traddr": "10.0.0.2", 00:19:20.014 "trsvcid": "4420" 00:19:20.014 }, 00:19:20.014 "peer_address": { 00:19:20.014 "trtype": "TCP", 00:19:20.014 "adrfam": "IPv4", 00:19:20.014 "traddr": "10.0.0.1", 00:19:20.014 "trsvcid": "52502" 00:19:20.014 }, 00:19:20.014 "auth": { 00:19:20.014 "state": "completed", 00:19:20.014 "digest": "sha512", 00:19:20.014 "dhgroup": "ffdhe4096" 00:19:20.014 } 00:19:20.014 } 00:19:20.014 ]' 00:19:20.014 09:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.014 09:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.014 09:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.014 09:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:20.014 09:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.274 09:27:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.274 09:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.274 09:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.535 09:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.496 09:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.754 09:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.754 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.754 09:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.012 00:19:22.012 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.012 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.012 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.269 { 00:19:22.269 "cntlid": 125, 00:19:22.269 "qid": 0, 00:19:22.269 "state": "enabled", 00:19:22.269 "thread": "nvmf_tgt_poll_group_000", 00:19:22.269 "listen_address": { 00:19:22.269 "trtype": "TCP", 00:19:22.269 "adrfam": "IPv4", 00:19:22.269 "traddr": "10.0.0.2", 00:19:22.269 "trsvcid": "4420" 00:19:22.269 }, 00:19:22.269 "peer_address": { 00:19:22.269 "trtype": "TCP", 00:19:22.269 "adrfam": "IPv4", 00:19:22.269 "traddr": "10.0.0.1", 00:19:22.269 "trsvcid": "52538" 00:19:22.269 }, 00:19:22.269 "auth": { 00:19:22.269 "state": "completed", 00:19:22.269 "digest": "sha512", 00:19:22.269 "dhgroup": "ffdhe4096" 00:19:22.269 } 00:19:22.269 } 00:19:22.269 ]' 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.269 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.527 09:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:19:23.459 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
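
Note: the records above repeat one verification cycle per key index and DH group (the cycle shown here uses sha512 with ffdhe4096; the same pattern repeats below for ffdhe6144 and ffdhe8192). Reconstructed from the xtrace, one cycle looks roughly like the sketch below. The helper names hostrpc/rpc_cmd, the keys/ckeys arrays, and every flag are taken from target/auth.sh as it appears in the trace; treat this as an illustration of the flow, not a verbatim copy of the script.

    # One connect_authenticate cycle, reconstructed from the trace (a sketch).
    # Assumes the SPDK target listens on 10.0.0.2:4420, a second SPDK app serves
    # the host-side RPC socket, and keys[i]/ckeys[i] hold the DHHC-1 secrets.
    digest=sha512 dhgroup=ffdhe4096 keyid=0
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

    # Pin the host to a single digest/dhgroup so the negotiation is deterministic.
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Register the host on the subsystem; the controller key is optional
    # (present only when bidirectional authentication is being tested).
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

    # Authenticate from the SPDK host side, verify the qpair, then tear down.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect: completed
    hostrpc bdev_nvme_detach_controller nvme0

    # Repeat with the kernel initiator, passing the raw DHHC-1 secrets.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "${hostnqn#*uuid:}" --dhchap-secret "${keys[keyid]}" \
        ${ckeys[keyid]:+--dhchap-ctrl-secret "${ckeys[keyid]}"}
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
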
00:19:23.459 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:23.459 09:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.459 09:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.459 09:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.459 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.459 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.459 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.716 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:23.716 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.716 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.716 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:23.716 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:23.716 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.716 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:23.716 09:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.716 09:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.716 09:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.716 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.716 09:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.284 00:19:24.284 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.284 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.284 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.284 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.543 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.543 09:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.543 09:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:24.543 09:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.543 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.543 { 00:19:24.543 "cntlid": 127, 00:19:24.543 "qid": 0, 00:19:24.543 "state": "enabled", 00:19:24.543 "thread": "nvmf_tgt_poll_group_000", 00:19:24.543 "listen_address": { 00:19:24.543 "trtype": "TCP", 00:19:24.543 "adrfam": "IPv4", 00:19:24.543 "traddr": "10.0.0.2", 00:19:24.543 "trsvcid": "4420" 00:19:24.543 }, 00:19:24.543 "peer_address": { 00:19:24.543 "trtype": "TCP", 00:19:24.543 "adrfam": "IPv4", 00:19:24.543 "traddr": "10.0.0.1", 00:19:24.543 "trsvcid": "52566" 00:19:24.543 }, 00:19:24.543 "auth": { 00:19:24.543 "state": "completed", 00:19:24.543 "digest": "sha512", 00:19:24.543 "dhgroup": "ffdhe4096" 00:19:24.543 } 00:19:24.543 } 00:19:24.543 ]' 00:19:24.543 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.543 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.543 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.543 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.543 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.543 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.543 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.543 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.800 09:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:19:25.736 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.736 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.736 09:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.736 09:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.736 09:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.736 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.736 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.736 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:25.736 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:25.993 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:25.993 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.993 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.993 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:25.993 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.993 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.993 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.993 09:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.993 09:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.994 09:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.994 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.994 09:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.558 00:19:26.558 09:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.558 09:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.558 09:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.558 09:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.558 09:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.558 09:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.558 09:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.558 09:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.558 09:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.558 { 00:19:26.558 "cntlid": 129, 00:19:26.558 "qid": 0, 00:19:26.558 "state": "enabled", 00:19:26.558 "thread": "nvmf_tgt_poll_group_000", 00:19:26.558 "listen_address": { 00:19:26.558 "trtype": "TCP", 00:19:26.558 "adrfam": "IPv4", 00:19:26.558 "traddr": "10.0.0.2", 00:19:26.558 "trsvcid": "4420" 00:19:26.558 }, 00:19:26.558 "peer_address": { 00:19:26.558 "trtype": "TCP", 00:19:26.558 "adrfam": "IPv4", 00:19:26.558 "traddr": "10.0.0.1", 00:19:26.558 "trsvcid": "60362" 00:19:26.558 }, 00:19:26.558 "auth": { 00:19:26.558 "state": "completed", 00:19:26.558 "digest": "sha512", 00:19:26.558 "dhgroup": "ffdhe6144" 00:19:26.558 } 00:19:26.558 } 00:19:26.558 ]' 00:19:26.558 09:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.817 09:27:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.817 09:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.817 09:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:26.817 09:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.817 09:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.817 09:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.817 09:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.075 09:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:19:28.009 09:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.009 09:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:28.009 09:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.009 09:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.009 09:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.009 09:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.009 09:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:28.009 09:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:28.266 09:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:28.266 09:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.266 09:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.266 09:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:28.266 09:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:28.266 09:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.266 09:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.266 09:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.266 09:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.266 09:27:39 
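
Note: each [[ ... ]] check above pulls a single field out of the nvmf_subsystem_get_qpairs JSON; the auth object is the part that matters, since it records what the target actually negotiated rather than what the host merely requested. The same JSON can be summarized in one jq pass (a sketch; the subsystem NQN is the one used throughout this run, and cntlid 129 is the qpair shown above):

    # One-line summary of each qpair's negotiated authentication.
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[] | "cntlid \(.cntlid): \(.auth.digest)/\(.auth.dhgroup) -> \(.auth.state)"'
    # e.g.: cntlid 129: sha512/ffdhe6144 -> completed
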
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.266 09:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.266 09:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.834 00:19:28.834 09:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.834 09:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.834 09:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.092 { 00:19:29.092 "cntlid": 131, 00:19:29.092 "qid": 0, 00:19:29.092 "state": "enabled", 00:19:29.092 "thread": "nvmf_tgt_poll_group_000", 00:19:29.092 "listen_address": { 00:19:29.092 "trtype": "TCP", 00:19:29.092 "adrfam": "IPv4", 00:19:29.092 "traddr": "10.0.0.2", 00:19:29.092 "trsvcid": "4420" 00:19:29.092 }, 00:19:29.092 "peer_address": { 00:19:29.092 "trtype": "TCP", 00:19:29.092 "adrfam": "IPv4", 00:19:29.092 "traddr": "10.0.0.1", 00:19:29.092 "trsvcid": "60386" 00:19:29.092 }, 00:19:29.092 "auth": { 00:19:29.092 "state": "completed", 00:19:29.092 "digest": "sha512", 00:19:29.092 "dhgroup": "ffdhe6144" 00:19:29.092 } 00:19:29.092 } 00:19:29.092 ]' 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.092 09:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.350 09:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:19:30.284 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.284 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:30.284 09:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.284 09:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.284 09:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.284 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.284 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:30.284 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:30.542 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:30.542 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.542 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.542 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:30.542 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:30.542 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.542 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.542 09:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.542 09:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.542 09:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.542 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.542 09:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.110 00:19:31.110 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.110 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.110 09:27:42 
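
Note: two RPC servers are in play here. Plain rpc_cmd talks to the SPDK target over its default socket, while every hostrpc call is routed, via target/auth.sh line 31, to a second SPDK application that plays the NVMe-oF host; the very next record shows that expansion. A wrapper in that spirit would be (a sketch; the rpc.py path and host socket are the ones visible in this trace, and a real script would parameterize the workspace path):

    # hostrpc: send an RPC to the host-side SPDK app instead of the target.
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
    # Example, matching the records above: confirm the attached controller exists.
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
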
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.368 { 00:19:31.368 "cntlid": 133, 00:19:31.368 "qid": 0, 00:19:31.368 "state": "enabled", 00:19:31.368 "thread": "nvmf_tgt_poll_group_000", 00:19:31.368 "listen_address": { 00:19:31.368 "trtype": "TCP", 00:19:31.368 "adrfam": "IPv4", 00:19:31.368 "traddr": "10.0.0.2", 00:19:31.368 "trsvcid": "4420" 00:19:31.368 }, 00:19:31.368 "peer_address": { 00:19:31.368 "trtype": "TCP", 00:19:31.368 "adrfam": "IPv4", 00:19:31.368 "traddr": "10.0.0.1", 00:19:31.368 "trsvcid": "60408" 00:19:31.368 }, 00:19:31.368 "auth": { 00:19:31.368 "state": "completed", 00:19:31.368 "digest": "sha512", 00:19:31.368 "dhgroup": "ffdhe6144" 00:19:31.368 } 00:19:31.368 } 00:19:31.368 ]' 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.368 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.629 09:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:19:32.569 09:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.569 09:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:32.569 09:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.569 09:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.569 09:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.569 09:27:43 
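
Note: the loop is about to reach keyid 3, the one entry with no controller key, so the trace's ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expands to an empty array and the following nvmf_subsystem_add_host and attach calls carry only --dhchap-key key3, i.e. unidirectional authentication. The ${var:+word} expansion is what makes the flag pair optional; a self-contained illustration of the idiom (hypothetical secret values, expansion as in auth.sh):

    # ${arr[i]:+word} expands to word only when arr[i] is set and non-empty,
    # so the --dhchap-ctrlr-key pair is emitted only for bidirectional keys.
    ckeys=([0]="secret0" [1]="secret1" [2]="secret2" [3]="")
    for i in 0 3; do
        extra=(${ckeys[i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "key$i -> ${extra[@]:-<no controller key>}"
    done
    # key0 -> --dhchap-ctrlr-key ckey0
    # key3 -> <no controller key>
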
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.569 09:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.569 09:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.828 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:32.828 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.828 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.087 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.087 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:33.087 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.087 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:33.087 09:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.087 09:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.087 09:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.087 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.087 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.657 00:19:33.657 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.657 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.657 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.916 { 00:19:33.916 "cntlid": 135, 00:19:33.916 "qid": 0, 00:19:33.916 "state": "enabled", 00:19:33.916 "thread": "nvmf_tgt_poll_group_000", 00:19:33.916 "listen_address": { 00:19:33.916 "trtype": "TCP", 00:19:33.916 "adrfam": "IPv4", 00:19:33.916 "traddr": "10.0.0.2", 00:19:33.916 "trsvcid": "4420" 00:19:33.916 }, 
00:19:33.916 "peer_address": { 00:19:33.916 "trtype": "TCP", 00:19:33.916 "adrfam": "IPv4", 00:19:33.916 "traddr": "10.0.0.1", 00:19:33.916 "trsvcid": "60434" 00:19:33.916 }, 00:19:33.916 "auth": { 00:19:33.916 "state": "completed", 00:19:33.916 "digest": "sha512", 00:19:33.916 "dhgroup": "ffdhe6144" 00:19:33.916 } 00:19:33.916 } 00:19:33.916 ]' 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.916 09:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.175 09:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:19:35.114 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.114 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:35.114 09:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.114 09:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.114 09:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.114 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.114 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.114 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:35.114 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:35.372 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:35.372 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.372 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.372 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:35.372 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:35.372 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:35.372 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.372 09:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.372 09:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.372 09:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.372 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.372 09:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.311 00:19:36.311 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.311 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.311 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.311 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.311 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.311 09:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.311 09:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.311 09:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.311 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.311 { 00:19:36.311 "cntlid": 137, 00:19:36.311 "qid": 0, 00:19:36.311 "state": "enabled", 00:19:36.311 "thread": "nvmf_tgt_poll_group_000", 00:19:36.311 "listen_address": { 00:19:36.311 "trtype": "TCP", 00:19:36.311 "adrfam": "IPv4", 00:19:36.311 "traddr": "10.0.0.2", 00:19:36.311 "trsvcid": "4420" 00:19:36.311 }, 00:19:36.311 "peer_address": { 00:19:36.311 "trtype": "TCP", 00:19:36.311 "adrfam": "IPv4", 00:19:36.311 "traddr": "10.0.0.1", 00:19:36.311 "trsvcid": "50056" 00:19:36.311 }, 00:19:36.311 "auth": { 00:19:36.311 "state": "completed", 00:19:36.311 "digest": "sha512", 00:19:36.311 "dhgroup": "ffdhe8192" 00:19:36.311 } 00:19:36.311 } 00:19:36.311 ]' 00:19:36.311 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.569 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.569 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.569 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:36.569 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.569 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.569 09:27:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.569 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.827 09:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:19:37.761 09:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.761 09:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:37.761 09:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.761 09:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.761 09:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.761 09:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.761 09:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.761 09:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.019 09:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:38.019 09:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.019 09:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.019 09:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:38.019 09:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:38.019 09:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.019 09:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.019 09:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.019 09:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.019 09:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.019 09:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.019 09:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.957 00:19:38.957 09:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.957 09:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.957 09:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.215 { 00:19:39.215 "cntlid": 139, 00:19:39.215 "qid": 0, 00:19:39.215 "state": "enabled", 00:19:39.215 "thread": "nvmf_tgt_poll_group_000", 00:19:39.215 "listen_address": { 00:19:39.215 "trtype": "TCP", 00:19:39.215 "adrfam": "IPv4", 00:19:39.215 "traddr": "10.0.0.2", 00:19:39.215 "trsvcid": "4420" 00:19:39.215 }, 00:19:39.215 "peer_address": { 00:19:39.215 "trtype": "TCP", 00:19:39.215 "adrfam": "IPv4", 00:19:39.215 "traddr": "10.0.0.1", 00:19:39.215 "trsvcid": "50090" 00:19:39.215 }, 00:19:39.215 "auth": { 00:19:39.215 "state": "completed", 00:19:39.215 "digest": "sha512", 00:19:39.215 "dhgroup": "ffdhe8192" 00:19:39.215 } 00:19:39.215 } 00:19:39.215 ]' 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.215 09:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.472 09:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OGQyNjQ5YzUxZmVhMDExNzYzODJlMDE1Y2JmY2ZhOWT+1yqE: --dhchap-ctrl-secret DHHC-1:02:ZDEyODliMmM4OTFjYWQ3OTIzMzJiYzc1Y2Q1ZjQ0ZDc4OTlmYjNiMWJhMThiOGY2nJOiQQ==: 00:19:40.409 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.409 09:27:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:40.409 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.409 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.409 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.409 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.409 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.409 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.666 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:40.666 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.666 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.666 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:40.666 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.666 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.666 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.666 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.666 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.667 09:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.667 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.667 09:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.602 00:19:41.602 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.602 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.602 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.602 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.602 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.602 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.602 09:27:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:41.861 09:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.861 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.861 { 00:19:41.861 "cntlid": 141, 00:19:41.861 "qid": 0, 00:19:41.861 "state": "enabled", 00:19:41.861 "thread": "nvmf_tgt_poll_group_000", 00:19:41.861 "listen_address": { 00:19:41.861 "trtype": "TCP", 00:19:41.861 "adrfam": "IPv4", 00:19:41.861 "traddr": "10.0.0.2", 00:19:41.861 "trsvcid": "4420" 00:19:41.861 }, 00:19:41.861 "peer_address": { 00:19:41.861 "trtype": "TCP", 00:19:41.861 "adrfam": "IPv4", 00:19:41.861 "traddr": "10.0.0.1", 00:19:41.861 "trsvcid": "50120" 00:19:41.861 }, 00:19:41.861 "auth": { 00:19:41.861 "state": "completed", 00:19:41.861 "digest": "sha512", 00:19:41.861 "dhgroup": "ffdhe8192" 00:19:41.861 } 00:19:41.861 } 00:19:41.861 ]' 00:19:41.861 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.861 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.861 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.861 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.861 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.861 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.861 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.861 09:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.119 09:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:NDQ4MzNhZjMzODIzY2U1NzAxZDE1ZjY2MzMwNjlhOWZkODk0Y2YxNzMwZTJmMTNlwvlBbg==: --dhchap-ctrl-secret DHHC-1:01:ZTc1ZGE1ODgyNmUxZWM5Njk1YWMzZmY3YjEzMTEyODmK5dgD: 00:19:43.056 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.056 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:43.056 09:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.056 09:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.056 09:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.056 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.056 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.056 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.312 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:19:43.313 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.313 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.313 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.313 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.313 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.313 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:43.313 09:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.313 09:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.313 09:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.313 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.313 09:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.246 00:19:44.246 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.246 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.246 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.246 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.246 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.246 09:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.246 09:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.503 09:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.503 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.503 { 00:19:44.503 "cntlid": 143, 00:19:44.503 "qid": 0, 00:19:44.503 "state": "enabled", 00:19:44.503 "thread": "nvmf_tgt_poll_group_000", 00:19:44.503 "listen_address": { 00:19:44.503 "trtype": "TCP", 00:19:44.503 "adrfam": "IPv4", 00:19:44.503 "traddr": "10.0.0.2", 00:19:44.503 "trsvcid": "4420" 00:19:44.503 }, 00:19:44.503 "peer_address": { 00:19:44.503 "trtype": "TCP", 00:19:44.503 "adrfam": "IPv4", 00:19:44.503 "traddr": "10.0.0.1", 00:19:44.503 "trsvcid": "50150" 00:19:44.503 }, 00:19:44.503 "auth": { 00:19:44.503 "state": "completed", 00:19:44.503 "digest": "sha512", 00:19:44.503 "dhgroup": "ffdhe8192" 00:19:44.503 } 00:19:44.503 } 00:19:44.503 ]' 00:19:44.503 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.503 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.503 
09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.503 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.503 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.503 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.503 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.504 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.760 09:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:19:45.700 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.700 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:45.700 09:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.700 09:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.700 09:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.700 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:45.700 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:45.700 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:45.700 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.700 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.700 09:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.958 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:45.958 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.958 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.958 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:45.958 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:45.958 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.958 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:45.958 09:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.958 09:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.958 09:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.958 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.959 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.895 00:19:46.895 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.895 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.895 09:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.154 { 00:19:47.154 "cntlid": 145, 00:19:47.154 "qid": 0, 00:19:47.154 "state": "enabled", 00:19:47.154 "thread": "nvmf_tgt_poll_group_000", 00:19:47.154 "listen_address": { 00:19:47.154 "trtype": "TCP", 00:19:47.154 "adrfam": "IPv4", 00:19:47.154 "traddr": "10.0.0.2", 00:19:47.154 "trsvcid": "4420" 00:19:47.154 }, 00:19:47.154 "peer_address": { 00:19:47.154 "trtype": "TCP", 00:19:47.154 "adrfam": "IPv4", 00:19:47.154 "traddr": "10.0.0.1", 00:19:47.154 "trsvcid": "58750" 00:19:47.154 }, 00:19:47.154 "auth": { 00:19:47.154 "state": "completed", 00:19:47.154 "digest": "sha512", 00:19:47.154 "dhgroup": "ffdhe8192" 00:19:47.154 } 00:19:47.154 } 00:19:47.154 ]' 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.154 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.412 09:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MjViNTk0ZmIxYjkwYTRjYTIyNGRlOTQ4NThlNzI1MGVmODY1NjZlNTU4MjUyMjdm9pHsZA==: --dhchap-ctrl-secret DHHC-1:03:NDQ1MzMxODZhNjFiYjQ1MjBkZTQxMWIyNTRlMmE3YzNjM2EwYWU0MDExYjQ5NzA2Yzk5OGQxYzczMzYwZTUwOSOhOLs=: 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:48.350 09:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:49.285 request: 00:19:49.285 { 00:19:49.285 "name": "nvme0", 00:19:49.285 "trtype": "tcp", 00:19:49.285 "traddr": "10.0.0.2", 00:19:49.285 "adrfam": "ipv4", 00:19:49.285 "trsvcid": "4420", 00:19:49.285 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:49.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:49.285 "prchk_reftag": false, 00:19:49.285 "prchk_guard": false, 00:19:49.285 "hdgst": false, 00:19:49.285 "ddgst": false, 00:19:49.285 "dhchap_key": "key2", 00:19:49.285 "method": "bdev_nvme_attach_controller", 00:19:49.285 "req_id": 1 00:19:49.285 } 00:19:49.285 Got JSON-RPC error response 00:19:49.285 response: 00:19:49.285 { 00:19:49.285 "code": -5, 00:19:49.285 "message": "Input/output error" 00:19:49.285 } 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:49.285 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:49.852 request: 00:19:49.852 { 00:19:49.852 "name": "nvme0", 00:19:49.852 "trtype": "tcp", 00:19:49.852 "traddr": "10.0.0.2", 00:19:49.852 "adrfam": "ipv4", 00:19:49.852 "trsvcid": "4420", 00:19:49.852 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:49.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:49.852 "prchk_reftag": false, 00:19:49.852 "prchk_guard": false, 00:19:49.852 "hdgst": false, 00:19:49.852 "ddgst": false, 00:19:49.852 "dhchap_key": "key1", 00:19:49.852 "dhchap_ctrlr_key": "ckey2", 00:19:49.852 "method": "bdev_nvme_attach_controller", 00:19:49.852 "req_id": 1 00:19:49.852 } 00:19:49.852 Got JSON-RPC error response 00:19:49.852 response: 00:19:49.852 { 00:19:49.852 "code": -5, 00:19:49.852 "message": "Input/output error" 00:19:49.852 } 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.852 09:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.792 request: 00:19:50.792 { 00:19:50.792 "name": "nvme0", 00:19:50.792 "trtype": "tcp", 00:19:50.792 "traddr": "10.0.0.2", 00:19:50.792 "adrfam": "ipv4", 00:19:50.792 "trsvcid": "4420", 00:19:50.792 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:50.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:50.792 "prchk_reftag": false, 00:19:50.792 "prchk_guard": false, 00:19:50.792 "hdgst": false, 00:19:50.792 "ddgst": false, 00:19:50.792 "dhchap_key": "key1", 00:19:50.792 "dhchap_ctrlr_key": "ckey1", 00:19:50.792 "method": "bdev_nvme_attach_controller", 00:19:50.792 "req_id": 1 00:19:50.792 } 00:19:50.792 Got JSON-RPC error response 00:19:50.792 response: 00:19:50.792 { 00:19:50.792 "code": -5, 00:19:50.792 "message": "Input/output error" 00:19:50.792 } 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 829484 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 829484 ']' 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 829484 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 829484 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 829484' 00:19:50.792 killing process with pid 829484 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 829484 00:19:50.792 09:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 829484 00:19:51.053 09:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:51.053 09:28:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:51.053 09:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:51.053 09:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.053 09:28:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=851735 00:19:51.053 09:28:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:51.053 09:28:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 851735 00:19:51.053 09:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 851735 ']' 00:19:51.053 09:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.053 09:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.053 09:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.053 09:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.053 09:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 851735 00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 851735 ']' 00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
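Two notes at this seam in the trace. First, the block above is the negative half of the key matrix: each deliberately mismatched pairing (attaching with key2 when only key1 is registered on the target, presenting ckey2 against a registered ckey1, and so on) is wrapped in NOT, the autotest helper that inverts the expected exit status, and every attempt has to surface the same JSON-RPC failure. A minimal sketch of the pattern, assuming the stock helpers whose expansions appear in the xtrace (NOT, hostrpc); "$hostnqn" below stands in for the long uuid-based host NQN repeated throughout:

  # The host must NOT be able to attach with a key the target never registered;
  # rpc.py relays the error object logged above and exits non-zero.
  NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key2
  # expected: {"code": -5, "message": "Input/output error"}

Second, the original target instance (pid 829484) is killed and relaunched as pid 851735 in pre-init mode: --wait-for-rpc parks nvmf_tgt before subsystem initialization so the keyring and listener can be restored purely over RPC, and -L nvmf_auth enables the authentication debug log component. Roughly, with the flags, netns and socket path taken from the log (the batched configuration behind rpc_cmd at target/auth.sh@143 is not expanded in the trace, so the rpc.py calls below are an assumption about its shape):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF \
      --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  waitforlisten "$nvmfpid"             # poll /var/tmp/spdk.sock until it accepts RPCs
  rpc.py framework_start_init          # release the app past --wait-for-rpc
  rpc.py nvmf_create_transport -t tcp  # then rebuild transport/subsystem state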
00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.990 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.246 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.246 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:52.246 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:52.246 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.246 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.504 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.504 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:52.504 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.504 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:52.504 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:52.504 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:52.504 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.504 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:52.504 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.504 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.504 09:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.504 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.504 09:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.439 00:19:53.440 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.440 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.440 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.440 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.440 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.440 09:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.440 09:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.440 09:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.440 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.440 { 00:19:53.440 
"cntlid": 1, 00:19:53.440 "qid": 0, 00:19:53.440 "state": "enabled", 00:19:53.440 "thread": "nvmf_tgt_poll_group_000", 00:19:53.440 "listen_address": { 00:19:53.440 "trtype": "TCP", 00:19:53.440 "adrfam": "IPv4", 00:19:53.440 "traddr": "10.0.0.2", 00:19:53.440 "trsvcid": "4420" 00:19:53.440 }, 00:19:53.440 "peer_address": { 00:19:53.440 "trtype": "TCP", 00:19:53.440 "adrfam": "IPv4", 00:19:53.440 "traddr": "10.0.0.1", 00:19:53.440 "trsvcid": "58806" 00:19:53.440 }, 00:19:53.440 "auth": { 00:19:53.440 "state": "completed", 00:19:53.440 "digest": "sha512", 00:19:53.440 "dhgroup": "ffdhe8192" 00:19:53.440 } 00:19:53.440 } 00:19:53.440 ]' 00:19:53.440 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.440 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.440 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.698 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:53.698 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.698 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.698 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.698 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.957 09:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:NWFjZjhlYmEzNDE2OGFkMWJhYzU0YmU3NDAyYjlmYTg1OGI0Y2EyOGY2OGRhNmEwOWI3ZjBlMjQwZDdhMzNhNDF+BrQ=: 00:19:54.892 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.892 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:54.892 09:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.892 09:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.892 09:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.892 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:54.892 09:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.892 09:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.892 09:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.892 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:54.892 09:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:54.892 09:28:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.892 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:54.892 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.892 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:54.892 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:54.892 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:54.892 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:54.892 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.892 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.150 request: 00:19:55.150 { 00:19:55.150 "name": "nvme0", 00:19:55.150 "trtype": "tcp", 00:19:55.150 "traddr": "10.0.0.2", 00:19:55.150 "adrfam": "ipv4", 00:19:55.150 "trsvcid": "4420", 00:19:55.150 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:55.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:55.150 "prchk_reftag": false, 00:19:55.150 "prchk_guard": false, 00:19:55.150 "hdgst": false, 00:19:55.150 "ddgst": false, 00:19:55.150 "dhchap_key": "key3", 00:19:55.150 "method": "bdev_nvme_attach_controller", 00:19:55.151 "req_id": 1 00:19:55.151 } 00:19:55.151 Got JSON-RPC error response 00:19:55.151 response: 00:19:55.151 { 00:19:55.151 "code": -5, 00:19:55.151 "message": "Input/output error" 00:19:55.151 } 00:19:55.151 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:55.151 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:55.151 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:55.151 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:55.151 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:55.151 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:55.151 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:55.151 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:55.409 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.409 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:55.409 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.409 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:55.409 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.409 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:55.409 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.409 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.409 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.667 request: 00:19:55.667 { 00:19:55.667 "name": "nvme0", 00:19:55.667 "trtype": "tcp", 00:19:55.667 "traddr": "10.0.0.2", 00:19:55.667 "adrfam": "ipv4", 00:19:55.667 "trsvcid": "4420", 00:19:55.667 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:55.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:55.667 "prchk_reftag": false, 00:19:55.667 "prchk_guard": false, 00:19:55.667 "hdgst": false, 00:19:55.667 "ddgst": false, 00:19:55.667 "dhchap_key": "key3", 00:19:55.667 "method": "bdev_nvme_attach_controller", 00:19:55.667 "req_id": 1 00:19:55.667 } 00:19:55.667 Got JSON-RPC error response 00:19:55.667 response: 00:19:55.667 { 00:19:55.667 "code": -5, 00:19:55.667 "message": "Input/output error" 00:19:55.667 } 00:19:55.667 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:55.667 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:55.667 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:55.667 09:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:55.667 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:55.667 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:55.667 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:55.667 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:55.667 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:55.667 09:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:55.925 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:56.184 request: 00:19:56.184 { 00:19:56.184 "name": "nvme0", 00:19:56.184 "trtype": "tcp", 00:19:56.184 "traddr": "10.0.0.2", 00:19:56.184 "adrfam": "ipv4", 00:19:56.184 "trsvcid": "4420", 00:19:56.184 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:56.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:56.184 "prchk_reftag": false, 00:19:56.184 "prchk_guard": false, 00:19:56.184 "hdgst": false, 00:19:56.184 "ddgst": false, 00:19:56.184 
"dhchap_key": "key0", 00:19:56.184 "dhchap_ctrlr_key": "key1", 00:19:56.184 "method": "bdev_nvme_attach_controller", 00:19:56.184 "req_id": 1 00:19:56.184 } 00:19:56.184 Got JSON-RPC error response 00:19:56.184 response: 00:19:56.184 { 00:19:56.184 "code": -5, 00:19:56.184 "message": "Input/output error" 00:19:56.184 } 00:19:56.184 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:56.184 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:56.184 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:56.184 09:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:56.184 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:56.184 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:56.442 00:19:56.701 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:56.701 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:56.701 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.701 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.701 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.701 09:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.960 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:56.960 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:56.960 09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 829504 00:19:56.960 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 829504 ']' 00:19:56.960 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 829504 00:19:56.960 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:56.960 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.219 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 829504 00:19:57.219 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:57.219 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:57.219 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 829504' 00:19:57.219 killing process with pid 829504 00:19:57.219 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 829504 00:19:57.219 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 829504 00:19:57.478 
09:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:57.478 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:57.478 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:57.478 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:57.478 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:57.478 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:57.478 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:57.478 rmmod nvme_tcp 00:19:57.478 rmmod nvme_fabrics 00:19:57.478 rmmod nvme_keyring 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 851735 ']' 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 851735 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 851735 ']' 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 851735 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 851735 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 851735' 00:19:57.737 killing process with pid 851735 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 851735 00:19:57.737 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 851735 00:19:57.995 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:57.995 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:57.995 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:57.995 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.995 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:57.995 09:28:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.995 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.995 09:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.900 09:28:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:59.900 09:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.kuQ /tmp/spdk.key-sha256.0os /tmp/spdk.key-sha384.JfD /tmp/spdk.key-sha512.CTU /tmp/spdk.key-sha512.YNq /tmp/spdk.key-sha384.AoN /tmp/spdk.key-sha256.o48 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:59.900 00:19:59.900 real 3m5.628s 00:19:59.900 user 7m13.252s 00:19:59.900 sys 0m24.961s 00:19:59.900 09:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:59.900 09:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.900 ************************************ 00:19:59.900 END TEST nvmf_auth_target 00:19:59.900 ************************************ 00:19:59.900 09:28:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:59.900 09:28:11 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:59.900 09:28:11 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:59.900 09:28:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:59.900 09:28:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.900 09:28:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:59.900 ************************************ 00:19:59.900 START TEST nvmf_bdevio_no_huge 00:19:59.900 ************************************ 00:19:59.900 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:00.159 * Looking for test storage... 00:20:00.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
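As the bdevio run boots, note where the host identity comes from: nvmf/common.sh derives it at source time with nvme gen-hostnqn, which typically builds the NQN from the machine's DMI uuid, which is why the same 29f67375-... value recurs in every --hostnqn/--hostid pair across this whole log. Approximately (only the resulting values appear in the trace; the ##*: extraction and the final connect line are illustrative assumptions, not quoted from common.sh):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<machine uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' -> bare uuid
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # later consumed as, e.g.:
  nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn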
00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.159 09:28:11 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.159 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:00.160 09:28:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
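The array setup just traced is a vendor:device lookup table: the two e810 entries (0x8086 with 0x1592/0x159b) are the IDs this run filters for, and the x722 and Mellanox entries that follow extend the same map. A sysfs-based sketch of the equivalent bucketing; the harness reads a prebuilt pci_bus_cache instead, so treat this as a re-derivation rather than the actual code:

# Collect E810 ports straight from sysfs by vendor/device ID.
e810=()
for dev in /sys/bus/pci/devices/*; do
    if [[ $(< "$dev/vendor") == 0x8086 && $(< "$dev/device") =~ ^0x(1592|159b)$ ]]; then
        e810+=("${dev##*/}")   # keep the PCI address, e.g. 0000:09:00.0
    fi
done
((${#e810[@]})) && printf 'Found E810 port %s\n' "${e810[@]}"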
00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.064 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:02.065 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:02.065 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.065 
09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:02.065 Found net devices under 0000:09:00.0: cvl_0_0 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:02.065 Found net devices under 0000:09:00.1: cvl_0_1 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.065 09:28:13 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:02.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:20:02.065 00:20:02.065 --- 10.0.0.2 ping statistics --- 00:20:02.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.065 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:20:02.065 00:20:02.065 --- 10.0.0.1 ping statistics --- 00:20:02.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.065 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=854444 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 854444 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 854444 ']' 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.065 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.323 [2024-07-15 09:28:13.294577] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:20:02.323 [2024-07-15 09:28:13.294671] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:02.323 [2024-07-15 09:28:13.366851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.323 [2024-07-15 09:28:13.471474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.323 [2024-07-15 09:28:13.471526] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.323 [2024-07-15 09:28:13.471554] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.323 [2024-07-15 09:28:13.471566] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.323 [2024-07-15 09:28:13.471576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
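Condensed, the namespace plumbing and target launch traced above amount to the following: one port of the NIC is moved into a namespace as the target side, the peer port stays in the root namespace as the initiator, port 4420 is opened, and nvmf_tgt runs inside the namespace with -s 1024 (a 1024 MiB anonymous-memory pool standing in for hugepages). The addr-flush steps and the waitforlisten handshake are omitted from this sketch:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root ns
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt --no-huge -s 1024 -m 0x78 &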
00:20:02.323 [2024-07-15 09:28:13.471978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:02.323 [2024-07-15 09:28:13.472030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:02.323 [2024-07-15 09:28:13.472087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:02.323 [2024-07-15 09:28:13.472090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.581 [2024-07-15 09:28:13.595933] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.581 Malloc0 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.581 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.582 [2024-07-15 09:28:13.633851] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.582 { 00:20:02.582 "params": { 00:20:02.582 "name": "Nvme$subsystem", 00:20:02.582 "trtype": "$TEST_TRANSPORT", 00:20:02.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.582 "adrfam": "ipv4", 00:20:02.582 "trsvcid": "$NVMF_PORT", 00:20:02.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.582 "hdgst": ${hdgst:-false}, 00:20:02.582 "ddgst": ${ddgst:-false} 00:20:02.582 }, 00:20:02.582 "method": "bdev_nvme_attach_controller" 00:20:02.582 } 00:20:02.582 EOF 00:20:02.582 )") 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:02.582 09:28:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:02.582 "params": { 00:20:02.582 "name": "Nvme1", 00:20:02.582 "trtype": "tcp", 00:20:02.582 "traddr": "10.0.0.2", 00:20:02.582 "adrfam": "ipv4", 00:20:02.582 "trsvcid": "4420", 00:20:02.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.582 "hdgst": false, 00:20:02.582 "ddgst": false 00:20:02.582 }, 00:20:02.582 "method": "bdev_nvme_attach_controller" 00:20:02.582 }' 00:20:02.582 [2024-07-15 09:28:13.680498] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:20:02.582 [2024-07-15 09:28:13.680569] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid854589 ] 00:20:02.582 [2024-07-15 09:28:13.744616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:02.841 [2024-07-15 09:28:13.858699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.841 [2024-07-15 09:28:13.858750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.841 [2024-07-15 09:28:13.858753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.100 I/O targets: 00:20:03.100 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:03.100 00:20:03.100 00:20:03.100 CUnit - A unit testing framework for C - Version 2.1-3 00:20:03.100 http://cunit.sourceforge.net/ 00:20:03.100 00:20:03.100 00:20:03.100 Suite: bdevio tests on: Nvme1n1 00:20:03.100 Test: blockdev write read block ...passed 00:20:03.100 Test: blockdev write zeroes read block ...passed 00:20:03.100 Test: blockdev write zeroes read no split ...passed 00:20:03.100 Test: blockdev write zeroes read split ...passed 00:20:03.100 Test: blockdev write zeroes read split partial ...passed 00:20:03.100 Test: blockdev reset ...[2024-07-15 09:28:14.220481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:03.100 [2024-07-15 09:28:14.220593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e0fb0 (9): Bad file descriptor 00:20:03.100 [2024-07-15 09:28:14.238224] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
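The bdevio process running these tests was started above with --json /dev/fd/62: gen_nvmf_target_json renders the bdev_nvme_attach_controller stanza printed in the trace, and the harness feeds it in on that descriptor. A plain process substitution reproduces the effect; the fd-62 plumbing is the harness's own detail, and the outer "subsystems" wrapper here is assumed from SPDK's JSON-config schema rather than copied from the trace:

config='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
  "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
  "subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1"}}]}]}'
./test/bdev/bdevio/bdevio --json <(echo "$config") --no-huge -s 1024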
00:20:03.100 passed 00:20:03.100 Test: blockdev write read 8 blocks ...passed 00:20:03.358 Test: blockdev write read size > 128k ...passed 00:20:03.358 Test: blockdev write read invalid size ...passed 00:20:03.358 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:03.358 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:03.358 Test: blockdev write read max offset ...passed 00:20:03.358 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:03.358 Test: blockdev writev readv 8 blocks ...passed 00:20:03.358 Test: blockdev writev readv 30 x 1block ...passed 00:20:03.358 Test: blockdev writev readv block ...passed 00:20:03.358 Test: blockdev writev readv size > 128k ...passed 00:20:03.358 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:03.358 Test: blockdev comparev and writev ...[2024-07-15 09:28:14.491195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.358 [2024-07-15 09:28:14.491229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.358 [2024-07-15 09:28:14.491253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.358 [2024-07-15 09:28:14.491271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.358 [2024-07-15 09:28:14.491601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.358 [2024-07-15 09:28:14.491626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:03.358 [2024-07-15 09:28:14.491649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.358 [2024-07-15 09:28:14.491665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:03.358 [2024-07-15 09:28:14.492006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.358 [2024-07-15 09:28:14.492030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:03.358 [2024-07-15 09:28:14.492052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.358 [2024-07-15 09:28:14.492068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:03.359 [2024-07-15 09:28:14.492390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.359 [2024-07-15 09:28:14.492414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:03.359 [2024-07-15 09:28:14.492435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.359 [2024-07-15 09:28:14.492451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:03.359 passed 00:20:03.617 Test: blockdev nvme passthru rw ...passed 00:20:03.617 Test: blockdev nvme passthru vendor specific ...[2024-07-15 09:28:14.574083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.617 [2024-07-15 09:28:14.574111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:03.617 [2024-07-15 09:28:14.574257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.617 [2024-07-15 09:28:14.574280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:03.617 [2024-07-15 09:28:14.574423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.617 [2024-07-15 09:28:14.574446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:03.617 [2024-07-15 09:28:14.574602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.617 [2024-07-15 09:28:14.574625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:03.617 passed 00:20:03.617 Test: blockdev nvme admin passthru ...passed 00:20:03.617 Test: blockdev copy ...passed 00:20:03.617 00:20:03.617 Run Summary: Type Total Ran Passed Failed Inactive 00:20:03.617 suites 1 1 n/a 0 0 00:20:03.617 tests 23 23 23 0 0 00:20:03.617 asserts 152 152 152 0 n/a 00:20:03.617 00:20:03.617 Elapsed time = 1.143 seconds 00:20:03.876 09:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:03.876 09:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.876 09:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.876 09:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.876 09:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:03.876 09:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:03.876 09:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:03.876 09:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:03.876 09:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:03.876 09:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:03.876 09:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:03.876 09:28:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:03.876 rmmod nvme_tcp 00:20:03.876 rmmod nvme_fabrics 00:20:03.876 rmmod nvme_keyring 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 854444 ']' 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 854444 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 854444 ']' 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 854444 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 854444 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 854444' 00:20:03.876 killing process with pid 854444 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 854444 00:20:03.876 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 854444 00:20:04.445 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:04.445 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:04.445 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:04.445 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.445 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:04.445 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.445 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.445 09:28:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.354 09:28:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:06.354 00:20:06.354 real 0m6.446s 00:20:06.354 user 0m10.542s 00:20:06.354 sys 0m2.384s 00:20:06.354 09:28:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:06.354 09:28:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:06.354 ************************************ 00:20:06.354 END TEST nvmf_bdevio_no_huge 00:20:06.354 ************************************ 00:20:06.354 09:28:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:06.354 09:28:17 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:06.354 09:28:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:06.354 09:28:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.354 09:28:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:06.613 ************************************ 00:20:06.613 START TEST nvmf_tls 00:20:06.613 ************************************ 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:06.613 * Looking for test storage... 
00:20:06.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.613 09:28:17 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:06.614 09:28:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:08.511 
09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:08.511 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:08.511 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:08.511 Found net devices under 0000:09:00.0: cvl_0_0 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.511 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:08.512 Found net devices under 0000:09:00.1: cvl_0_1 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.512 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:08.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:20:08.771 00:20:08.771 --- 10.0.0.2 ping statistics --- 00:20:08.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.771 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:08.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:20:08.771 00:20:08.771 --- 10.0.0.1 ping statistics --- 00:20:08.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.771 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=856662 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 856662 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 856662 ']' 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.771 09:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.771 [2024-07-15 09:28:19.880270] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:20:08.771 [2024-07-15 09:28:19.880334] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.771 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.771 [2024-07-15 09:28:19.943096] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.031 [2024-07-15 09:28:20.063260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.031 [2024-07-15 09:28:20.063334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:09.031 [2024-07-15 09:28:20.063349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.031 [2024-07-15 09:28:20.063361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.031 [2024-07-15 09:28:20.063370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.031 [2024-07-15 09:28:20.063417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.966 09:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.966 09:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:09.966 09:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.966 09:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:09.966 09:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.966 09:28:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.966 09:28:20 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:09.966 09:28:20 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:09.966 true 00:20:09.966 09:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.966 09:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:10.231 09:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:10.231 09:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:10.231 09:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:10.490 09:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.490 09:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:10.747 09:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:10.747 09:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:10.747 09:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:11.006 09:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:11.006 09:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:11.264 09:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:11.264 09:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:11.264 09:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:11.264 09:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:11.520 09:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:11.520 09:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:11.520 09:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:11.779 09:28:22 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:11.779 09:28:22 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:12.036 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:12.036 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:12.036 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:12.293 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:12.293 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:12.549 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:12.549 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:12.549 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.KwTDY4onjA 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.w5bGT2ytwU 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.KwTDY4onjA 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.w5bGT2ytwU 00:20:12.550 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:12.806 09:28:23 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:13.062 09:28:24 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.KwTDY4onjA 00:20:13.062 09:28:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.KwTDY4onjA 00:20:13.062 09:28:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:13.320 [2024-07-15 09:28:24.446693] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.320 09:28:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:13.584 09:28:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:13.842 [2024-07-15 09:28:24.972124] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.842 [2024-07-15 09:28:24.972339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.842 09:28:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:14.100 malloc0 00:20:14.100 09:28:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:14.358 09:28:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KwTDY4onjA 00:20:14.616 [2024-07-15 09:28:25.769512] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:14.616 09:28:25 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.KwTDY4onjA 00:20:14.873 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.871 Initializing NVMe Controllers 00:20:24.871 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:24.871 Initialization complete. Launching workers. 
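The latency table below comes from spdk_nvme_perf connecting with -S ssl and --psk-path. Before that run, the trace above provisions the target: the two interchange keys are written to mode-0600 temp files, initialization is released, and a TLS-enabled listener is bound. Condensed (rpc.py = scripts/rpc.py; the target acknowledges -k with the 'TLS support is considered experimental' notice):

  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KwTDY4onjA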
00:20:24.871 ======================================================== 00:20:24.871 Latency(us) 00:20:24.871 Device Information : IOPS MiB/s Average min max 00:20:24.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8745.49 34.16 7320.00 990.45 8663.32 00:20:24.871 ======================================================== 00:20:24.871 Total : 8745.49 34.16 7320.00 990.45 8663.32 00:20:24.871 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KwTDY4onjA 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.KwTDY4onjA' 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=858565 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 858565 /var/tmp/bdevperf.sock 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 858565 ']' 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:24.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.871 09:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.871 [2024-07-15 09:28:35.925880] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
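Here bdevperf (pid 858565) starts idle; its EAL banner continues below. Every positive and negative case in this suite then follows the same initiator pattern: attach an NVMe-oF controller through bdevperf's private RPC socket, run the verify workload, tear down. A sketch using the paths from this log:

  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KwTDY4onjA
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests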
00:20:24.871 [2024-07-15 09:28:35.925959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858565 ] 00:20:24.871 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.871 [2024-07-15 09:28:35.983680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.158 [2024-07-15 09:28:36.094798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.158 09:28:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.158 09:28:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:25.158 09:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KwTDY4onjA 00:20:25.452 [2024-07-15 09:28:36.477909] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.452 [2024-07-15 09:28:36.478035] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:25.452 TLSTESTn1 00:20:25.452 09:28:36 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:25.710 Running I/O for 10 seconds... 00:20:35.676 00:20:35.676 Latency(us) 00:20:35.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.676 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:35.676 Verification LBA range: start 0x0 length 0x2000 00:20:35.676 TLSTESTn1 : 10.02 3410.72 13.32 0.00 0.00 37463.96 7136.14 34952.53 00:20:35.676 =================================================================================================================== 00:20:35.676 Total : 3410.72 13.32 0.00 0.00 37463.96 7136.14 34952.53 00:20:35.676 0 00:20:35.676 09:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:35.676 09:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 858565 00:20:35.676 09:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 858565 ']' 00:20:35.676 09:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 858565 00:20:35.676 09:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:35.676 09:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.676 09:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 858565 00:20:35.676 09:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:35.676 09:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:35.676 09:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 858565' 00:20:35.676 killing process with pid 858565 00:20:35.676 09:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 858565 00:20:35.676 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.676 00:20:35.676 Latency(us) 00:20:35.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:20:35.676 =================================================================================================================== 00:20:35.676 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.676 [2024-07-15 09:28:46.778049] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.676 09:28:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 858565 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.w5bGT2ytwU 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.w5bGT2ytwU 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.w5bGT2ytwU 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.w5bGT2ytwU' 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=859885 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.933 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 859885 /var/tmp/bdevperf.sock 00:20:35.934 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 859885 ']' 00:20:35.934 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.934 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.934 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.934 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.934 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.934 [2024-07-15 09:28:47.082929] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
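tls.sh line 146 opens the first negative case: the initiator presents /tmp/tmp.w5bGT2ytwU, a well-formed key that is not the one registered for host1, so the TLS handshake cannot complete and the attach is expected to fail (the errno 107 / -5 errors below). The NOT helper from autotest_common.sh inverts the exit status, so the case passes only when the command fails. In outline:

  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.w5bGT2ytwU
  # run_bdevperf returns 1 when bdev_nvme_attach_controller errors out; the es=1
  # bookkeeping visible in the trace is NOT converting that expected failure into a pass.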
00:20:35.934 [2024-07-15 09:28:47.083009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859885 ] 00:20:35.934 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.191 [2024-07-15 09:28:47.143570] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.191 [2024-07-15 09:28:47.251279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.191 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.191 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:36.191 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.w5bGT2ytwU 00:20:36.447 [2024-07-15 09:28:47.582486] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.447 [2024-07-15 09:28:47.582616] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:36.447 [2024-07-15 09:28:47.590400] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:36.447 [2024-07-15 09:28:47.590479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b3f90 (107): Transport endpoint is not connected 00:20:36.447 [2024-07-15 09:28:47.591453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b3f90 (9): Bad file descriptor 00:20:36.447 [2024-07-15 09:28:47.592452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:36.447 [2024-07-15 09:28:47.592476] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:36.447 [2024-07-15 09:28:47.592507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:36.447 request: 00:20:36.447 { 00:20:36.447 "name": "TLSTEST", 00:20:36.447 "trtype": "tcp", 00:20:36.447 "traddr": "10.0.0.2", 00:20:36.447 "adrfam": "ipv4", 00:20:36.447 "trsvcid": "4420", 00:20:36.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.447 "prchk_reftag": false, 00:20:36.447 "prchk_guard": false, 00:20:36.447 "hdgst": false, 00:20:36.447 "ddgst": false, 00:20:36.447 "psk": "/tmp/tmp.w5bGT2ytwU", 00:20:36.447 "method": "bdev_nvme_attach_controller", 00:20:36.447 "req_id": 1 00:20:36.447 } 00:20:36.447 Got JSON-RPC error response 00:20:36.447 response: 00:20:36.447 { 00:20:36.447 "code": -5, 00:20:36.448 "message": "Input/output error" 00:20:36.448 } 00:20:36.448 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 859885 00:20:36.448 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 859885 ']' 00:20:36.448 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 859885 00:20:36.448 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:36.448 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:36.448 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 859885 00:20:36.448 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:36.448 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:36.448 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 859885' 00:20:36.448 killing process with pid 859885 00:20:36.448 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 859885 00:20:36.448 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.448 00:20:36.448 Latency(us) 00:20:36.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.448 =================================================================================================================== 00:20:36.448 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.448 [2024-07-15 09:28:47.634258] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:36.448 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 859885 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.KwTDY4onjA 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.KwTDY4onjA 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.KwTDY4onjA 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.KwTDY4onjA' 00:20:36.704 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.705 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=860015 00:20:36.705 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.705 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.705 09:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 860015 /var/tmp/bdevperf.sock 00:20:36.705 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 860015 ']' 00:20:36.705 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.705 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.705 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.705 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.705 09:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.962 [2024-07-15 09:28:47.929657] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
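The second negative case (tls.sh line 149) reuses the valid key but connects as host2. Target-side PSKs are registered per (hostnqn, subnqn) pair, which is exactly the identity string in the lookup error below, NVMe0R01 <hostnqn> <subnqn>. Only host1 was ever added:

  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KwTDY4onjA
  # No entry exists for host2, hence 'Could not find PSK for identity: NVMe0R01
  # nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1'; the third case repeats the
  # same lookup failure from the other side, with the unknown subsystem cnode2.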
00:20:36.962 [2024-07-15 09:28:47.929738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860015 ] 00:20:36.962 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.962 [2024-07-15 09:28:47.987745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.962 [2024-07-15 09:28:48.089824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.219 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.219 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:37.219 09:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.KwTDY4onjA 00:20:37.477 [2024-07-15 09:28:48.468739] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.477 [2024-07-15 09:28:48.468892] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:37.477 [2024-07-15 09:28:48.474148] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:37.477 [2024-07-15 09:28:48.474184] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:37.477 [2024-07-15 09:28:48.474226] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:37.477 [2024-07-15 09:28:48.474749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045f90 (107): Transport endpoint is not connected 00:20:37.477 [2024-07-15 09:28:48.475737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045f90 (9): Bad file descriptor 00:20:37.477 [2024-07-15 09:28:48.476736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.477 [2024-07-15 09:28:48.476756] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:37.477 [2024-07-15 09:28:48.476787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:37.477 request: 00:20:37.477 { 00:20:37.477 "name": "TLSTEST", 00:20:37.477 "trtype": "tcp", 00:20:37.477 "traddr": "10.0.0.2", 00:20:37.477 "adrfam": "ipv4", 00:20:37.477 "trsvcid": "4420", 00:20:37.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.477 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:37.477 "prchk_reftag": false, 00:20:37.477 "prchk_guard": false, 00:20:37.477 "hdgst": false, 00:20:37.477 "ddgst": false, 00:20:37.477 "psk": "/tmp/tmp.KwTDY4onjA", 00:20:37.477 "method": "bdev_nvme_attach_controller", 00:20:37.477 "req_id": 1 00:20:37.477 } 00:20:37.477 Got JSON-RPC error response 00:20:37.477 response: 00:20:37.477 { 00:20:37.477 "code": -5, 00:20:37.477 "message": "Input/output error" 00:20:37.477 } 00:20:37.477 09:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 860015 00:20:37.477 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 860015 ']' 00:20:37.477 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 860015 00:20:37.477 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:37.477 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.477 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 860015 00:20:37.477 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:37.477 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:37.477 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 860015' 00:20:37.477 killing process with pid 860015 00:20:37.477 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 860015 00:20:37.477 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.477 00:20:37.477 Latency(us) 00:20:37.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.477 =================================================================================================================== 00:20:37.477 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.477 [2024-07-15 09:28:48.522580] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:37.477 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 860015 00:20:37.733 09:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:37.733 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:37.733 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.733 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.733 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.733 09:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.KwTDY4onjA 00:20:37.733 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:37.733 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.KwTDY4onjA 00:20:37.733 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:37.733 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.733 09:28:48 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:37.733 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.733 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.KwTDY4onjA 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.KwTDY4onjA' 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=860054 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 860054 /var/tmp/bdevperf.sock 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 860054 ']' 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.734 09:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.734 [2024-07-15 09:28:48.819464] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:20:37.734 [2024-07-15 09:28:48.819549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860054 ] 00:20:37.734 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.734 [2024-07-15 09:28:48.883079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.990 [2024-07-15 09:28:48.993180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.990 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.990 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:37.990 09:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KwTDY4onjA 00:20:38.248 [2024-07-15 09:28:49.376128] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:38.248 [2024-07-15 09:28:49.376255] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:38.248 [2024-07-15 09:28:49.384922] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:38.248 [2024-07-15 09:28:49.384955] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:38.248 [2024-07-15 09:28:49.385011] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:38.248 [2024-07-15 09:28:49.385981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cdf90 (107): Transport endpoint is not connected 00:20:38.248 [2024-07-15 09:28:49.386972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cdf90 (9): Bad file descriptor 00:20:38.248 [2024-07-15 09:28:49.387970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:38.248 [2024-07-15 09:28:49.387991] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:38.248 [2024-07-15 09:28:49.388008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:38.248 request: 00:20:38.248 { 00:20:38.248 "name": "TLSTEST", 00:20:38.248 "trtype": "tcp", 00:20:38.248 "traddr": "10.0.0.2", 00:20:38.248 "adrfam": "ipv4", 00:20:38.248 "trsvcid": "4420", 00:20:38.248 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:38.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.248 "prchk_reftag": false, 00:20:38.248 "prchk_guard": false, 00:20:38.248 "hdgst": false, 00:20:38.248 "ddgst": false, 00:20:38.248 "psk": "/tmp/tmp.KwTDY4onjA", 00:20:38.248 "method": "bdev_nvme_attach_controller", 00:20:38.248 "req_id": 1 00:20:38.248 } 00:20:38.248 Got JSON-RPC error response 00:20:38.248 response: 00:20:38.248 { 00:20:38.248 "code": -5, 00:20:38.248 "message": "Input/output error" 00:20:38.248 } 00:20:38.248 09:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 860054 00:20:38.248 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 860054 ']' 00:20:38.248 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 860054 00:20:38.248 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:38.248 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.248 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 860054 00:20:38.248 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:38.248 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:38.248 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 860054' 00:20:38.248 killing process with pid 860054 00:20:38.248 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 860054 00:20:38.248 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.248 00:20:38.248 Latency(us) 00:20:38.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.248 =================================================================================================================== 00:20:38.248 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.248 [2024-07-15 09:28:49.439776] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:38.248 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 860054 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.505 09:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=860175 00:20:38.763 09:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:38.763 09:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:38.763 09:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 860175 /var/tmp/bdevperf.sock 00:20:38.763 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 860175 ']' 00:20:38.763 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.763 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.763 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.763 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.763 09:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.763 [2024-07-15 09:28:49.742633] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
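The last combination in this block (tls.sh line 155) drops the key entirely: the attach below is issued without --psk (note the JSON-RPC request that follows has no "psk" field), against a listener that was created with -k, and the connection is dropped during setup:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # Expected to fail: without a PSK the initiator never completes the secure-channel
  # setup, and the usual 'Bad file descriptor' / -5 Input/output error follows.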
00:20:38.763 [2024-07-15 09:28:49.742713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860175 ] 00:20:38.763 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.763 [2024-07-15 09:28:49.801630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.763 [2024-07-15 09:28:49.906342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.020 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.020 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:39.020 09:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:39.278 [2024-07-15 09:28:50.257254] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:39.278 [2024-07-15 09:28:50.258859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5de770 (9): Bad file descriptor 00:20:39.278 [2024-07-15 09:28:50.259840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.278 [2024-07-15 09:28:50.259874] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:39.278 [2024-07-15 09:28:50.259891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:39.278 request: 00:20:39.278 { 00:20:39.278 "name": "TLSTEST", 00:20:39.278 "trtype": "tcp", 00:20:39.278 "traddr": "10.0.0.2", 00:20:39.278 "adrfam": "ipv4", 00:20:39.278 "trsvcid": "4420", 00:20:39.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.278 "prchk_reftag": false, 00:20:39.278 "prchk_guard": false, 00:20:39.278 "hdgst": false, 00:20:39.278 "ddgst": false, 00:20:39.278 "method": "bdev_nvme_attach_controller", 00:20:39.278 "req_id": 1 00:20:39.278 } 00:20:39.278 Got JSON-RPC error response 00:20:39.278 response: 00:20:39.278 { 00:20:39.278 "code": -5, 00:20:39.278 "message": "Input/output error" 00:20:39.278 } 00:20:39.278 09:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 860175 00:20:39.278 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 860175 ']' 00:20:39.278 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 860175 00:20:39.278 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:39.278 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.278 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 860175 00:20:39.278 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:39.278 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:39.278 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 860175' 00:20:39.278 killing process with pid 860175 00:20:39.278 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 860175 00:20:39.278 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.278 00:20:39.278 Latency(us) 00:20:39.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.278 =================================================================================================================== 00:20:39.278 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:39.278 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 860175 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 856662 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 856662 ']' 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 856662 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 856662 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 856662' 00:20:39.536 killing 
process with pid 856662 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 856662 00:20:39.536 [2024-07-15 09:28:50.550622] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:39.536 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 856662 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.hE4n99WUFQ 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.hE4n99WUFQ 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=860325 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 860325 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 860325 ']' 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.794 09:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.794 [2024-07-15 09:28:50.892322] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
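The key_long built above follows the NVMe/TCP PSK interchange format: the literal prefix NVMeTLSkey-1, a hash selector (01 in the earlier keys, 02 here), a base64 payload, and a trailing colon. The base64 payloads in this trace decode back to the ASCII hex strings plus four extra bytes, which look like an integrity tail. A sketch of what format_interchange_psk appears to compute, mirroring the script's own python-heredoc idiom (assumption: the tail is a CRC-32 of the key string appended little-endian; only the prefix, selector, and base64-of-key parts are confirmed by the log itself):

  key=00112233445566778899aabbccddeeff0011223344556677
  python3 - "$key" <<'EOF'
  import base64, sys, zlib
  raw = sys.argv[1].encode()                    # the ASCII hex string is the key material
  crc = zlib.crc32(raw).to_bytes(4, "little")   # assumed 4-byte CRC-32 tail
  print("NVMeTLSkey-1:02:" + base64.b64encode(raw + crc).decode() + ":")
  EOF

The result is written to a fresh mktemp path and chmod 0600, the same permission the key loader later enforces.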
00:20:39.794 [2024-07-15 09:28:50.892407] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.794 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.794 [2024-07-15 09:28:50.953244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.052 [2024-07-15 09:28:51.061001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.052 [2024-07-15 09:28:51.061056] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.052 [2024-07-15 09:28:51.061095] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.052 [2024-07-15 09:28:51.061107] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.052 [2024-07-15 09:28:51.061116] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.052 [2024-07-15 09:28:51.061142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.052 09:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.052 09:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:40.052 09:28:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:40.052 09:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:40.052 09:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.052 09:28:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.052 09:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.hE4n99WUFQ 00:20:40.052 09:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hE4n99WUFQ 00:20:40.052 09:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:40.310 [2024-07-15 09:28:51.464015] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.310 09:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:40.876 09:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:40.876 [2024-07-15 09:28:52.005477] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.876 [2024-07-15 09:28:52.005702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.876 09:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:41.134 malloc0 00:20:41.134 09:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:41.392 09:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.hE4n99WUFQ 00:20:41.650 [2024-07-15 09:28:52.802122] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hE4n99WUFQ 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hE4n99WUFQ' 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=860607 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 860607 /var/tmp/bdevperf.sock 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 860607 ']' 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.650 09:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.908 [2024-07-15 09:28:52.866411] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:20:41.908 [2024-07-15 09:28:52.866484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860607 ] 00:20:41.908 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.908 [2024-07-15 09:28:52.926205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.908 [2024-07-15 09:28:53.031657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.175 09:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.175 09:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:42.175 09:28:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hE4n99WUFQ 00:20:42.435 [2024-07-15 09:28:53.378755] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.435 [2024-07-15 09:28:53.378915] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:42.435 TLSTESTn1 00:20:42.435 09:28:53 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:42.435 Running I/O for 10 seconds... 00:20:54.632 00:20:54.632 Latency(us) 00:20:54.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.632 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:54.632 Verification LBA range: start 0x0 length 0x2000 00:20:54.632 TLSTESTn1 : 10.02 2882.42 11.26 0.00 0.00 44332.83 6844.87 49516.09 00:20:54.632 =================================================================================================================== 00:20:54.632 Total : 2882.42 11.26 0.00 0.00 44332.83 6844.87 49516.09 00:20:54.632 0 00:20:54.632 09:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:54.632 09:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 860607 00:20:54.632 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 860607 ']' 00:20:54.632 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 860607 00:20:54.632 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:54.632 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.632 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 860607 00:20:54.632 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:54.632 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:54.632 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 860607' 00:20:54.632 killing process with pid 860607 00:20:54.632 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 860607 00:20:54.632 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.633 00:20:54.633 Latency(us) 00:20:54.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:20:54.633 =================================================================================================================== 00:20:54.633 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.633 [2024-07-15 09:29:03.651618] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 860607 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.hE4n99WUFQ 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hE4n99WUFQ 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hE4n99WUFQ 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hE4n99WUFQ 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hE4n99WUFQ' 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=861926 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 861926 /var/tmp/bdevperf.sock 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 861926 ']' 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.633 09:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.633 [2024-07-15 09:29:03.968735] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
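A minimal sketch of the negative test being exercised at this point, condensed from the traces above: the PSK file is deliberately made world-readable, and the attach that follows is expected to fail because SPDK refuses to load a key with permissive modes. Paths, NQNs, and the key name are taken from this log; running from the SPDK checkout root is assumed.

PSK=/tmp/tmp.hE4n99WUFQ
chmod 0666 "$PSK"   # too permissive on purpose; SPDK must refuse to load it
# This call should come back as a JSON-RPC error rather than create the bdev.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$PSK" && { echo "attach unexpectedly succeeded"; exit 1; }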
00:20:54.633 [2024-07-15 09:29:03.968834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861926 ] 00:20:54.633 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.633 [2024-07-15 09:29:04.026909] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.633 [2024-07-15 09:29:04.135057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hE4n99WUFQ 00:20:54.633 [2024-07-15 09:29:04.522881] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.633 [2024-07-15 09:29:04.522968] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:54.633 [2024-07-15 09:29:04.522984] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.hE4n99WUFQ 00:20:54.633 request: 00:20:54.633 { 00:20:54.633 "name": "TLSTEST", 00:20:54.633 "trtype": "tcp", 00:20:54.633 "traddr": "10.0.0.2", 00:20:54.633 "adrfam": "ipv4", 00:20:54.633 "trsvcid": "4420", 00:20:54.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.633 "prchk_reftag": false, 00:20:54.633 "prchk_guard": false, 00:20:54.633 "hdgst": false, 00:20:54.633 "ddgst": false, 00:20:54.633 "psk": "/tmp/tmp.hE4n99WUFQ", 00:20:54.633 "method": "bdev_nvme_attach_controller", 00:20:54.633 "req_id": 1 00:20:54.633 } 00:20:54.633 Got JSON-RPC error response 00:20:54.633 response: 00:20:54.633 { 00:20:54.633 "code": -1, 00:20:54.633 "message": "Operation not permitted" 00:20:54.633 } 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 861926 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 861926 ']' 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 861926 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 861926 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 861926' 00:20:54.633 killing process with pid 861926 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 861926 00:20:54.633 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.633 00:20:54.633 Latency(us) 00:20:54.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.633 =================================================================================================================== 
00:20:54.633 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 861926 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 860325 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 860325 ']' 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 860325 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 860325 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 860325' 00:20:54.633 killing process with pid 860325 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 860325 00:20:54.633 [2024-07-15 09:29:04.856240] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:54.633 09:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 860325 00:20:54.633 09:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:54.633 09:29:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.633 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:54.633 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.633 09:29:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=862067 00:20:54.633 09:29:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:54.633 09:29:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 862067 00:20:54.633 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 862067 ']' 00:20:54.633 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.634 [2024-07-15 09:29:05.183574] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
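The "Operation not permitted" response above is the pass condition for that case. The test then tears down the bdevperf instance and brings up a fresh target before moving on; a condensed sketch of that start-and-wait pattern, with the namespace, binary, and socket path taken from the log (the polling loop stands in for the waitforlisten helper from autotest_common.sh, and rpc_get_methods is assumed here as a cheap liveness query):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
NVMFPID=$!
# Poll the RPC socket until the target answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done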
00:20:54.634 [2024-07-15 09:29:05.183658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.634 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.634 [2024-07-15 09:29:05.255244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.634 [2024-07-15 09:29:05.364787] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.634 [2024-07-15 09:29:05.364862] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.634 [2024-07-15 09:29:05.364891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.634 [2024-07-15 09:29:05.364903] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.634 [2024-07-15 09:29:05.364913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.634 [2024-07-15 09:29:05.364940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.hE4n99WUFQ 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.hE4n99WUFQ 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.hE4n99WUFQ 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hE4n99WUFQ 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:54.634 [2024-07-15 09:29:05.707320] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.634 09:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:54.891 09:29:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:55.148 [2024-07-15 09:29:06.196612] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:20:55.148 [2024-07-15 09:29:06.196847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.148 09:29:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:55.406 malloc0 00:20:55.406 09:29:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:55.662 09:29:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hE4n99WUFQ 00:20:55.919 [2024-07-15 09:29:06.912664] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:55.919 [2024-07-15 09:29:06.912701] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:55.919 [2024-07-15 09:29:06.912746] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:55.919 request: 00:20:55.919 { 00:20:55.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.919 "host": "nqn.2016-06.io.spdk:host1", 00:20:55.919 "psk": "/tmp/tmp.hE4n99WUFQ", 00:20:55.919 "method": "nvmf_subsystem_add_host", 00:20:55.919 "req_id": 1 00:20:55.919 } 00:20:55.919 Got JSON-RPC error response 00:20:55.919 response: 00:20:55.919 { 00:20:55.920 "code": -32603, 00:20:55.920 "message": "Internal error" 00:20:55.920 } 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 862067 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 862067 ']' 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 862067 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 862067 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 862067' 00:20:55.920 killing process with pid 862067 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 862067 00:20:55.920 09:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 862067 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.hE4n99WUFQ 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=862369 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 862369 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 862369 ']' 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:56.177 09:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.177 [2024-07-15 09:29:07.279425] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:20:56.177 [2024-07-15 09:29:07.279508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.177 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.177 [2024-07-15 09:29:07.342322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.434 [2024-07-15 09:29:07.448163] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.434 [2024-07-15 09:29:07.448211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.434 [2024-07-15 09:29:07.448240] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.434 [2024-07-15 09:29:07.448251] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.434 [2024-07-15 09:29:07.448261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
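With the key restored to mode 0600 (target/tls.sh@181 above), the same provisioning is replayed against the new target and this time succeeds. A condensed sketch of that setup_nvmf_tgt sequence, using only commands and arguments that appear verbatim in the surrounding traces:

chmod 0600 /tmp/tmp.hE4n99WUFQ
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k enables TLS on the listener
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hE4n99WUFQ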
00:20:56.434 [2024-07-15 09:29:07.448292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.434 09:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.434 09:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:56.434 09:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:56.434 09:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:56.434 09:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.434 09:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.435 09:29:07 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.hE4n99WUFQ 00:20:56.435 09:29:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hE4n99WUFQ 00:20:56.435 09:29:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:56.691 [2024-07-15 09:29:07.815896] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.691 09:29:07 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:56.948 09:29:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:57.205 [2024-07-15 09:29:08.313199] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.205 [2024-07-15 09:29:08.313402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.205 09:29:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:57.469 malloc0 00:20:57.469 09:29:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:57.729 09:29:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hE4n99WUFQ 00:20:57.986 [2024-07-15 09:29:09.053495] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:57.986 09:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=862525 00:20:57.986 09:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.986 09:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.987 09:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 862525 /var/tmp/bdevperf.sock 00:20:57.987 09:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 862525 ']' 00:20:57.987 09:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.987 09:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.987 09:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.987 09:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.987 09:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.987 [2024-07-15 09:29:09.108689] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:20:57.987 [2024-07-15 09:29:09.108758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862525 ] 00:20:57.987 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.987 [2024-07-15 09:29:09.167937] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.245 [2024-07-15 09:29:09.279550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.245 09:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.245 09:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:58.245 09:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hE4n99WUFQ 00:20:58.503 [2024-07-15 09:29:09.601141] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.503 [2024-07-15 09:29:09.601288] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:58.503 TLSTESTn1 00:20:58.503 09:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:59.069 09:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:59.069 "subsystems": [ 00:20:59.069 { 00:20:59.069 "subsystem": "keyring", 00:20:59.069 "config": [] 00:20:59.069 }, 00:20:59.069 { 00:20:59.069 "subsystem": "iobuf", 00:20:59.069 "config": [ 00:20:59.069 { 00:20:59.069 "method": "iobuf_set_options", 00:20:59.069 "params": { 00:20:59.069 "small_pool_count": 8192, 00:20:59.069 "large_pool_count": 1024, 00:20:59.069 "small_bufsize": 8192, 00:20:59.069 "large_bufsize": 135168 00:20:59.069 } 00:20:59.069 } 00:20:59.069 ] 00:20:59.069 }, 00:20:59.069 { 00:20:59.069 "subsystem": "sock", 00:20:59.069 "config": [ 00:20:59.069 { 00:20:59.069 "method": "sock_set_default_impl", 00:20:59.069 "params": { 00:20:59.069 "impl_name": "posix" 00:20:59.069 } 00:20:59.069 }, 00:20:59.069 { 00:20:59.069 "method": "sock_impl_set_options", 00:20:59.069 "params": { 00:20:59.069 "impl_name": "ssl", 00:20:59.069 "recv_buf_size": 4096, 00:20:59.069 "send_buf_size": 4096, 00:20:59.069 "enable_recv_pipe": true, 00:20:59.069 "enable_quickack": false, 00:20:59.069 "enable_placement_id": 0, 00:20:59.069 "enable_zerocopy_send_server": true, 00:20:59.069 "enable_zerocopy_send_client": false, 00:20:59.069 "zerocopy_threshold": 0, 00:20:59.069 "tls_version": 0, 00:20:59.069 "enable_ktls": false 00:20:59.069 } 00:20:59.069 }, 00:20:59.069 { 00:20:59.069 "method": "sock_impl_set_options", 00:20:59.069 "params": { 00:20:59.069 "impl_name": "posix", 00:20:59.069 "recv_buf_size": 2097152, 00:20:59.069 
"send_buf_size": 2097152, 00:20:59.069 "enable_recv_pipe": true, 00:20:59.069 "enable_quickack": false, 00:20:59.069 "enable_placement_id": 0, 00:20:59.069 "enable_zerocopy_send_server": true, 00:20:59.069 "enable_zerocopy_send_client": false, 00:20:59.069 "zerocopy_threshold": 0, 00:20:59.069 "tls_version": 0, 00:20:59.069 "enable_ktls": false 00:20:59.069 } 00:20:59.069 } 00:20:59.069 ] 00:20:59.069 }, 00:20:59.069 { 00:20:59.069 "subsystem": "vmd", 00:20:59.069 "config": [] 00:20:59.069 }, 00:20:59.069 { 00:20:59.069 "subsystem": "accel", 00:20:59.069 "config": [ 00:20:59.069 { 00:20:59.069 "method": "accel_set_options", 00:20:59.069 "params": { 00:20:59.069 "small_cache_size": 128, 00:20:59.069 "large_cache_size": 16, 00:20:59.069 "task_count": 2048, 00:20:59.069 "sequence_count": 2048, 00:20:59.069 "buf_count": 2048 00:20:59.069 } 00:20:59.069 } 00:20:59.069 ] 00:20:59.069 }, 00:20:59.069 { 00:20:59.069 "subsystem": "bdev", 00:20:59.069 "config": [ 00:20:59.069 { 00:20:59.069 "method": "bdev_set_options", 00:20:59.069 "params": { 00:20:59.069 "bdev_io_pool_size": 65535, 00:20:59.069 "bdev_io_cache_size": 256, 00:20:59.069 "bdev_auto_examine": true, 00:20:59.069 "iobuf_small_cache_size": 128, 00:20:59.069 "iobuf_large_cache_size": 16 00:20:59.069 } 00:20:59.069 }, 00:20:59.069 { 00:20:59.069 "method": "bdev_raid_set_options", 00:20:59.069 "params": { 00:20:59.069 "process_window_size_kb": 1024 00:20:59.069 } 00:20:59.069 }, 00:20:59.069 { 00:20:59.069 "method": "bdev_iscsi_set_options", 00:20:59.069 "params": { 00:20:59.069 "timeout_sec": 30 00:20:59.069 } 00:20:59.069 }, 00:20:59.069 { 00:20:59.069 "method": "bdev_nvme_set_options", 00:20:59.069 "params": { 00:20:59.069 "action_on_timeout": "none", 00:20:59.069 "timeout_us": 0, 00:20:59.069 "timeout_admin_us": 0, 00:20:59.069 "keep_alive_timeout_ms": 10000, 00:20:59.069 "arbitration_burst": 0, 00:20:59.069 "low_priority_weight": 0, 00:20:59.069 "medium_priority_weight": 0, 00:20:59.069 "high_priority_weight": 0, 00:20:59.069 "nvme_adminq_poll_period_us": 10000, 00:20:59.069 "nvme_ioq_poll_period_us": 0, 00:20:59.069 "io_queue_requests": 0, 00:20:59.069 "delay_cmd_submit": true, 00:20:59.070 "transport_retry_count": 4, 00:20:59.070 "bdev_retry_count": 3, 00:20:59.070 "transport_ack_timeout": 0, 00:20:59.070 "ctrlr_loss_timeout_sec": 0, 00:20:59.070 "reconnect_delay_sec": 0, 00:20:59.070 "fast_io_fail_timeout_sec": 0, 00:20:59.070 "disable_auto_failback": false, 00:20:59.070 "generate_uuids": false, 00:20:59.070 "transport_tos": 0, 00:20:59.070 "nvme_error_stat": false, 00:20:59.070 "rdma_srq_size": 0, 00:20:59.070 "io_path_stat": false, 00:20:59.070 "allow_accel_sequence": false, 00:20:59.070 "rdma_max_cq_size": 0, 00:20:59.070 "rdma_cm_event_timeout_ms": 0, 00:20:59.070 "dhchap_digests": [ 00:20:59.070 "sha256", 00:20:59.070 "sha384", 00:20:59.070 "sha512" 00:20:59.070 ], 00:20:59.070 "dhchap_dhgroups": [ 00:20:59.070 "null", 00:20:59.070 "ffdhe2048", 00:20:59.070 "ffdhe3072", 00:20:59.070 "ffdhe4096", 00:20:59.070 "ffdhe6144", 00:20:59.070 "ffdhe8192" 00:20:59.070 ] 00:20:59.070 } 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "method": "bdev_nvme_set_hotplug", 00:20:59.070 "params": { 00:20:59.070 "period_us": 100000, 00:20:59.070 "enable": false 00:20:59.070 } 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "method": "bdev_malloc_create", 00:20:59.070 "params": { 00:20:59.070 "name": "malloc0", 00:20:59.070 "num_blocks": 8192, 00:20:59.070 "block_size": 4096, 00:20:59.070 "physical_block_size": 4096, 00:20:59.070 "uuid": 
"f284bf5b-b35c-4c95-ba09-04ba6032bbc7", 00:20:59.070 "optimal_io_boundary": 0 00:20:59.070 } 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "method": "bdev_wait_for_examine" 00:20:59.070 } 00:20:59.070 ] 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "subsystem": "nbd", 00:20:59.070 "config": [] 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "subsystem": "scheduler", 00:20:59.070 "config": [ 00:20:59.070 { 00:20:59.070 "method": "framework_set_scheduler", 00:20:59.070 "params": { 00:20:59.070 "name": "static" 00:20:59.070 } 00:20:59.070 } 00:20:59.070 ] 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "subsystem": "nvmf", 00:20:59.070 "config": [ 00:20:59.070 { 00:20:59.070 "method": "nvmf_set_config", 00:20:59.070 "params": { 00:20:59.070 "discovery_filter": "match_any", 00:20:59.070 "admin_cmd_passthru": { 00:20:59.070 "identify_ctrlr": false 00:20:59.070 } 00:20:59.070 } 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "method": "nvmf_set_max_subsystems", 00:20:59.070 "params": { 00:20:59.070 "max_subsystems": 1024 00:20:59.070 } 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "method": "nvmf_set_crdt", 00:20:59.070 "params": { 00:20:59.070 "crdt1": 0, 00:20:59.070 "crdt2": 0, 00:20:59.070 "crdt3": 0 00:20:59.070 } 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "method": "nvmf_create_transport", 00:20:59.070 "params": { 00:20:59.070 "trtype": "TCP", 00:20:59.070 "max_queue_depth": 128, 00:20:59.070 "max_io_qpairs_per_ctrlr": 127, 00:20:59.070 "in_capsule_data_size": 4096, 00:20:59.070 "max_io_size": 131072, 00:20:59.070 "io_unit_size": 131072, 00:20:59.070 "max_aq_depth": 128, 00:20:59.070 "num_shared_buffers": 511, 00:20:59.070 "buf_cache_size": 4294967295, 00:20:59.070 "dif_insert_or_strip": false, 00:20:59.070 "zcopy": false, 00:20:59.070 "c2h_success": false, 00:20:59.070 "sock_priority": 0, 00:20:59.070 "abort_timeout_sec": 1, 00:20:59.070 "ack_timeout": 0, 00:20:59.070 "data_wr_pool_size": 0 00:20:59.070 } 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "method": "nvmf_create_subsystem", 00:20:59.070 "params": { 00:20:59.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.070 "allow_any_host": false, 00:20:59.070 "serial_number": "SPDK00000000000001", 00:20:59.070 "model_number": "SPDK bdev Controller", 00:20:59.070 "max_namespaces": 10, 00:20:59.070 "min_cntlid": 1, 00:20:59.070 "max_cntlid": 65519, 00:20:59.070 "ana_reporting": false 00:20:59.070 } 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "method": "nvmf_subsystem_add_host", 00:20:59.070 "params": { 00:20:59.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.070 "host": "nqn.2016-06.io.spdk:host1", 00:20:59.070 "psk": "/tmp/tmp.hE4n99WUFQ" 00:20:59.070 } 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "method": "nvmf_subsystem_add_ns", 00:20:59.070 "params": { 00:20:59.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.070 "namespace": { 00:20:59.070 "nsid": 1, 00:20:59.070 "bdev_name": "malloc0", 00:20:59.070 "nguid": "F284BF5BB35C4C95BA0904BA6032BBC7", 00:20:59.070 "uuid": "f284bf5b-b35c-4c95-ba09-04ba6032bbc7", 00:20:59.070 "no_auto_visible": false 00:20:59.070 } 00:20:59.070 } 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "method": "nvmf_subsystem_add_listener", 00:20:59.070 "params": { 00:20:59.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.070 "listen_address": { 00:20:59.070 "trtype": "TCP", 00:20:59.070 "adrfam": "IPv4", 00:20:59.070 "traddr": "10.0.0.2", 00:20:59.070 "trsvcid": "4420" 00:20:59.070 }, 00:20:59.070 "secure_channel": true 00:20:59.070 } 00:20:59.070 } 00:20:59.070 ] 00:20:59.070 } 00:20:59.070 ] 00:20:59.070 }' 00:20:59.070 09:29:10 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:59.329 09:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:59.329 "subsystems": [ 00:20:59.329 { 00:20:59.329 "subsystem": "keyring", 00:20:59.329 "config": [] 00:20:59.329 }, 00:20:59.329 { 00:20:59.329 "subsystem": "iobuf", 00:20:59.329 "config": [ 00:20:59.329 { 00:20:59.329 "method": "iobuf_set_options", 00:20:59.329 "params": { 00:20:59.329 "small_pool_count": 8192, 00:20:59.329 "large_pool_count": 1024, 00:20:59.329 "small_bufsize": 8192, 00:20:59.329 "large_bufsize": 135168 00:20:59.329 } 00:20:59.329 } 00:20:59.329 ] 00:20:59.329 }, 00:20:59.329 { 00:20:59.329 "subsystem": "sock", 00:20:59.329 "config": [ 00:20:59.329 { 00:20:59.329 "method": "sock_set_default_impl", 00:20:59.329 "params": { 00:20:59.329 "impl_name": "posix" 00:20:59.329 } 00:20:59.329 }, 00:20:59.329 { 00:20:59.329 "method": "sock_impl_set_options", 00:20:59.329 "params": { 00:20:59.329 "impl_name": "ssl", 00:20:59.329 "recv_buf_size": 4096, 00:20:59.329 "send_buf_size": 4096, 00:20:59.329 "enable_recv_pipe": true, 00:20:59.329 "enable_quickack": false, 00:20:59.329 "enable_placement_id": 0, 00:20:59.329 "enable_zerocopy_send_server": true, 00:20:59.329 "enable_zerocopy_send_client": false, 00:20:59.329 "zerocopy_threshold": 0, 00:20:59.329 "tls_version": 0, 00:20:59.329 "enable_ktls": false 00:20:59.329 } 00:20:59.329 }, 00:20:59.329 { 00:20:59.329 "method": "sock_impl_set_options", 00:20:59.329 "params": { 00:20:59.329 "impl_name": "posix", 00:20:59.329 "recv_buf_size": 2097152, 00:20:59.329 "send_buf_size": 2097152, 00:20:59.329 "enable_recv_pipe": true, 00:20:59.329 "enable_quickack": false, 00:20:59.329 "enable_placement_id": 0, 00:20:59.329 "enable_zerocopy_send_server": true, 00:20:59.329 "enable_zerocopy_send_client": false, 00:20:59.329 "zerocopy_threshold": 0, 00:20:59.329 "tls_version": 0, 00:20:59.329 "enable_ktls": false 00:20:59.329 } 00:20:59.329 } 00:20:59.329 ] 00:20:59.329 }, 00:20:59.329 { 00:20:59.329 "subsystem": "vmd", 00:20:59.329 "config": [] 00:20:59.329 }, 00:20:59.329 { 00:20:59.329 "subsystem": "accel", 00:20:59.329 "config": [ 00:20:59.329 { 00:20:59.329 "method": "accel_set_options", 00:20:59.329 "params": { 00:20:59.329 "small_cache_size": 128, 00:20:59.329 "large_cache_size": 16, 00:20:59.329 "task_count": 2048, 00:20:59.329 "sequence_count": 2048, 00:20:59.329 "buf_count": 2048 00:20:59.329 } 00:20:59.329 } 00:20:59.329 ] 00:20:59.329 }, 00:20:59.329 { 00:20:59.329 "subsystem": "bdev", 00:20:59.329 "config": [ 00:20:59.329 { 00:20:59.329 "method": "bdev_set_options", 00:20:59.329 "params": { 00:20:59.329 "bdev_io_pool_size": 65535, 00:20:59.329 "bdev_io_cache_size": 256, 00:20:59.329 "bdev_auto_examine": true, 00:20:59.329 "iobuf_small_cache_size": 128, 00:20:59.329 "iobuf_large_cache_size": 16 00:20:59.329 } 00:20:59.329 }, 00:20:59.329 { 00:20:59.329 "method": "bdev_raid_set_options", 00:20:59.329 "params": { 00:20:59.329 "process_window_size_kb": 1024 00:20:59.329 } 00:20:59.329 }, 00:20:59.329 { 00:20:59.329 "method": "bdev_iscsi_set_options", 00:20:59.329 "params": { 00:20:59.329 "timeout_sec": 30 00:20:59.329 } 00:20:59.329 }, 00:20:59.329 { 00:20:59.329 "method": "bdev_nvme_set_options", 00:20:59.329 "params": { 00:20:59.329 "action_on_timeout": "none", 00:20:59.329 "timeout_us": 0, 00:20:59.329 "timeout_admin_us": 0, 00:20:59.329 "keep_alive_timeout_ms": 10000, 00:20:59.329 "arbitration_burst": 0, 
00:20:59.329 "low_priority_weight": 0, 00:20:59.329 "medium_priority_weight": 0, 00:20:59.329 "high_priority_weight": 0, 00:20:59.329 "nvme_adminq_poll_period_us": 10000, 00:20:59.329 "nvme_ioq_poll_period_us": 0, 00:20:59.329 "io_queue_requests": 512, 00:20:59.329 "delay_cmd_submit": true, 00:20:59.329 "transport_retry_count": 4, 00:20:59.329 "bdev_retry_count": 3, 00:20:59.329 "transport_ack_timeout": 0, 00:20:59.329 "ctrlr_loss_timeout_sec": 0, 00:20:59.329 "reconnect_delay_sec": 0, 00:20:59.329 "fast_io_fail_timeout_sec": 0, 00:20:59.329 "disable_auto_failback": false, 00:20:59.329 "generate_uuids": false, 00:20:59.329 "transport_tos": 0, 00:20:59.329 "nvme_error_stat": false, 00:20:59.329 "rdma_srq_size": 0, 00:20:59.329 "io_path_stat": false, 00:20:59.329 "allow_accel_sequence": false, 00:20:59.329 "rdma_max_cq_size": 0, 00:20:59.329 "rdma_cm_event_timeout_ms": 0, 00:20:59.329 "dhchap_digests": [ 00:20:59.329 "sha256", 00:20:59.329 "sha384", 00:20:59.329 "sha512" 00:20:59.329 ], 00:20:59.329 "dhchap_dhgroups": [ 00:20:59.329 "null", 00:20:59.329 "ffdhe2048", 00:20:59.329 "ffdhe3072", 00:20:59.329 "ffdhe4096", 00:20:59.329 "ffdhe6144", 00:20:59.329 "ffdhe8192" 00:20:59.329 ] 00:20:59.329 } 00:20:59.329 }, 00:20:59.329 { 00:20:59.329 "method": "bdev_nvme_attach_controller", 00:20:59.329 "params": { 00:20:59.329 "name": "TLSTEST", 00:20:59.329 "trtype": "TCP", 00:20:59.329 "adrfam": "IPv4", 00:20:59.329 "traddr": "10.0.0.2", 00:20:59.330 "trsvcid": "4420", 00:20:59.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.330 "prchk_reftag": false, 00:20:59.330 "prchk_guard": false, 00:20:59.330 "ctrlr_loss_timeout_sec": 0, 00:20:59.330 "reconnect_delay_sec": 0, 00:20:59.330 "fast_io_fail_timeout_sec": 0, 00:20:59.330 "psk": "/tmp/tmp.hE4n99WUFQ", 00:20:59.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.330 "hdgst": false, 00:20:59.330 "ddgst": false 00:20:59.330 } 00:20:59.330 }, 00:20:59.330 { 00:20:59.330 "method": "bdev_nvme_set_hotplug", 00:20:59.330 "params": { 00:20:59.330 "period_us": 100000, 00:20:59.330 "enable": false 00:20:59.330 } 00:20:59.330 }, 00:20:59.330 { 00:20:59.330 "method": "bdev_wait_for_examine" 00:20:59.330 } 00:20:59.330 ] 00:20:59.330 }, 00:20:59.330 { 00:20:59.330 "subsystem": "nbd", 00:20:59.330 "config": [] 00:20:59.330 } 00:20:59.330 ] 00:20:59.330 }' 00:20:59.330 09:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 862525 00:20:59.330 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 862525 ']' 00:20:59.330 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 862525 00:20:59.330 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:59.330 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.330 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 862525 00:20:59.330 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:59.330 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:59.330 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 862525' 00:20:59.330 killing process with pid 862525 00:20:59.330 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 862525 00:20:59.330 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.330 00:20:59.330 Latency(us) 00:20:59.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:59.330 =================================================================================================================== 00:20:59.330 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:59.330 [2024-07-15 09:29:10.322952] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:59.330 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 862525 00:20:59.588 09:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 862369 00:20:59.588 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 862369 ']' 00:20:59.588 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 862369 00:20:59.588 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:59.588 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.588 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 862369 00:20:59.588 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:59.588 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:59.588 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 862369' 00:20:59.588 killing process with pid 862369 00:20:59.588 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 862369 00:20:59.588 [2024-07-15 09:29:10.595060] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:59.588 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 862369 00:20:59.847 09:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:59.847 09:29:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:59.847 09:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:59.847 "subsystems": [ 00:20:59.847 { 00:20:59.847 "subsystem": "keyring", 00:20:59.847 "config": [] 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "subsystem": "iobuf", 00:20:59.847 "config": [ 00:20:59.847 { 00:20:59.847 "method": "iobuf_set_options", 00:20:59.847 "params": { 00:20:59.847 "small_pool_count": 8192, 00:20:59.847 "large_pool_count": 1024, 00:20:59.847 "small_bufsize": 8192, 00:20:59.847 "large_bufsize": 135168 00:20:59.847 } 00:20:59.847 } 00:20:59.847 ] 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "subsystem": "sock", 00:20:59.847 "config": [ 00:20:59.847 { 00:20:59.847 "method": "sock_set_default_impl", 00:20:59.847 "params": { 00:20:59.847 "impl_name": "posix" 00:20:59.847 } 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "method": "sock_impl_set_options", 00:20:59.847 "params": { 00:20:59.847 "impl_name": "ssl", 00:20:59.847 "recv_buf_size": 4096, 00:20:59.847 "send_buf_size": 4096, 00:20:59.847 "enable_recv_pipe": true, 00:20:59.847 "enable_quickack": false, 00:20:59.847 "enable_placement_id": 0, 00:20:59.847 "enable_zerocopy_send_server": true, 00:20:59.847 "enable_zerocopy_send_client": false, 00:20:59.847 "zerocopy_threshold": 0, 00:20:59.847 "tls_version": 0, 00:20:59.847 "enable_ktls": false 00:20:59.847 } 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "method": "sock_impl_set_options", 00:20:59.847 "params": { 00:20:59.847 "impl_name": "posix", 00:20:59.847 "recv_buf_size": 2097152, 00:20:59.847 "send_buf_size": 2097152, 00:20:59.847 "enable_recv_pipe": true, 00:20:59.847 
"enable_quickack": false, 00:20:59.847 "enable_placement_id": 0, 00:20:59.847 "enable_zerocopy_send_server": true, 00:20:59.847 "enable_zerocopy_send_client": false, 00:20:59.847 "zerocopy_threshold": 0, 00:20:59.847 "tls_version": 0, 00:20:59.847 "enable_ktls": false 00:20:59.847 } 00:20:59.847 } 00:20:59.847 ] 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "subsystem": "vmd", 00:20:59.847 "config": [] 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "subsystem": "accel", 00:20:59.847 "config": [ 00:20:59.847 { 00:20:59.847 "method": "accel_set_options", 00:20:59.847 "params": { 00:20:59.847 "small_cache_size": 128, 00:20:59.847 "large_cache_size": 16, 00:20:59.847 "task_count": 2048, 00:20:59.847 "sequence_count": 2048, 00:20:59.847 "buf_count": 2048 00:20:59.847 } 00:20:59.847 } 00:20:59.847 ] 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "subsystem": "bdev", 00:20:59.847 "config": [ 00:20:59.847 { 00:20:59.847 "method": "bdev_set_options", 00:20:59.847 "params": { 00:20:59.847 "bdev_io_pool_size": 65535, 00:20:59.847 "bdev_io_cache_size": 256, 00:20:59.847 "bdev_auto_examine": true, 00:20:59.847 "iobuf_small_cache_size": 128, 00:20:59.847 "iobuf_large_cache_size": 16 00:20:59.847 } 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "method": "bdev_raid_set_options", 00:20:59.847 "params": { 00:20:59.847 "process_window_size_kb": 1024 00:20:59.847 } 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "method": "bdev_iscsi_set_options", 00:20:59.847 "params": { 00:20:59.847 "timeout_sec": 30 00:20:59.847 } 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "method": "bdev_nvme_set_options", 00:20:59.847 "params": { 00:20:59.847 "action_on_timeout": "none", 00:20:59.847 "timeout_us": 0, 00:20:59.847 "timeout_admin_us": 0, 00:20:59.847 "keep_alive_timeout_ms": 10000, 00:20:59.847 "arbitration_burst": 0, 00:20:59.847 "low_priority_weight": 0, 00:20:59.847 "medium_priority_weight": 0, 00:20:59.847 "high_priority_weight": 0, 00:20:59.847 "nvme_adminq_poll_period_us": 10000, 00:20:59.847 "nvme_ioq_poll_period_us": 0, 00:20:59.847 "io_queue_requests": 0, 00:20:59.847 "delay_cmd_submit": true, 00:20:59.847 "transport_retry_count": 4, 00:20:59.847 "bdev_retry_count": 3, 00:20:59.847 "transport_ack_timeout": 0, 00:20:59.847 "ctrlr_loss_timeout_sec": 0, 00:20:59.847 "reconnect_delay_sec": 0, 00:20:59.847 "fast_io_fail_timeout_sec": 0, 00:20:59.847 "disable_auto_failback": false, 00:20:59.847 "generate_uuids": false, 00:20:59.847 "transport_tos": 0, 00:20:59.847 "nvme_error_stat": false, 00:20:59.847 "rdma_srq_size": 0, 00:20:59.847 "io_path_stat": false, 00:20:59.847 "allow_accel_sequence": false, 00:20:59.847 "rdma_max_cq_size": 0, 00:20:59.847 "rdma_cm_event_timeout_ms": 0, 00:20:59.847 "dhchap_digests": [ 00:20:59.847 "sha256", 00:20:59.847 "sha384", 00:20:59.847 "sha512" 00:20:59.847 ], 00:20:59.847 "dhchap_dhgroups": [ 00:20:59.847 "null", 00:20:59.847 "ffdhe2048", 00:20:59.847 "ffdhe3072", 00:20:59.847 "ffdhe4096", 00:20:59.847 "ffdhe6144", 00:20:59.847 "ffdhe8192" 00:20:59.847 ] 00:20:59.847 } 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "method": "bdev_nvme_set_hotplug", 00:20:59.847 "params": { 00:20:59.847 "period_us": 100000, 00:20:59.847 "enable": false 00:20:59.847 } 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "method": "bdev_malloc_create", 00:20:59.847 "params": { 00:20:59.847 "name": "malloc0", 00:20:59.847 "num_blocks": 8192, 00:20:59.847 "block_size": 4096, 00:20:59.847 "physical_block_size": 4096, 00:20:59.847 "uuid": "f284bf5b-b35c-4c95-ba09-04ba6032bbc7", 00:20:59.847 "optimal_io_boundary": 0 00:20:59.847 } 
00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "method": "bdev_wait_for_examine" 00:20:59.847 } 00:20:59.847 ] 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "subsystem": "nbd", 00:20:59.847 "config": [] 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "subsystem": "scheduler", 00:20:59.847 "config": [ 00:20:59.847 { 00:20:59.847 "method": "framework_set_scheduler", 00:20:59.847 "params": { 00:20:59.847 "name": "static" 00:20:59.847 } 00:20:59.847 } 00:20:59.847 ] 00:20:59.847 }, 00:20:59.847 { 00:20:59.847 "subsystem": "nvmf", 00:20:59.847 "config": [ 00:20:59.847 { 00:20:59.847 "method": "nvmf_set_config", 00:20:59.847 "params": { 00:20:59.847 "discovery_filter": "match_any", 00:20:59.847 "admin_cmd_passthru": { 00:20:59.847 "identify_ctrlr": false 00:20:59.847 } 00:20:59.848 } 00:20:59.848 }, 00:20:59.848 { 00:20:59.848 "method": "nvmf_set_max_subsystems", 00:20:59.848 "params": { 00:20:59.848 "max_subsystems": 1024 00:20:59.848 } 00:20:59.848 }, 00:20:59.848 { 00:20:59.848 "method": "nvmf_set_crdt", 00:20:59.848 "params": { 00:20:59.848 "crdt1": 0, 00:20:59.848 "crdt2": 0, 00:20:59.848 "crdt3": 0 00:20:59.848 } 00:20:59.848 }, 00:20:59.848 { 00:20:59.848 "method": "nvmf_create_transport", 00:20:59.848 "params": { 00:20:59.848 "trtype": "TCP", 00:20:59.848 "max_queue_depth": 128, 00:20:59.848 "max_io_qpairs_per_ctrlr": 127, 00:20:59.848 "in_capsule_data_size": 4096, 00:20:59.848 "max_io_size": 131072, 00:20:59.848 "io_unit_size": 131072, 00:20:59.848 "max_aq_depth": 128, 00:20:59.848 "num_shared_buffers": 511, 00:20:59.848 "buf_cache_size": 4294967295, 00:20:59.848 "dif_insert_or_strip": false, 00:20:59.848 "zcopy": false, 00:20:59.848 "c2h_success": false, 00:20:59.848 "sock_priority": 0, 00:20:59.848 "abort_timeout_sec": 1, 00:20:59.848 "ack_timeout": 0, 00:20:59.848 "data_wr_pool_size": 0 00:20:59.848 } 00:20:59.848 }, 00:20:59.848 { 00:20:59.848 "method": "nvmf_create_subsystem", 00:20:59.848 "params": { 00:20:59.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.848 "allow_any_host": false, 00:20:59.848 "serial_number": "SPDK00000000000001", 00:20:59.848 "model_number": "SPDK bdev Controller", 00:20:59.848 "max_namespaces": 10, 00:20:59.848 "min_cntlid": 1, 00:20:59.848 "max_cntlid": 65519, 00:20:59.848 "ana_reporting": false 00:20:59.848 } 00:20:59.848 }, 00:20:59.848 { 00:20:59.848 "method": "nvmf_subsystem_add_host", 00:20:59.848 "params": { 00:20:59.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.848 "host": "nqn.2016-06.io.spdk:host1", 00:20:59.848 "psk": "/tmp/tmp.hE4n99WUFQ" 00:20:59.848 } 00:20:59.848 }, 00:20:59.848 { 00:20:59.848 "method": "nvmf_subsystem_add_ns", 00:20:59.848 "params": { 00:20:59.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.848 "namespace": { 00:20:59.848 "nsid": 1, 00:20:59.848 "bdev_name": "malloc0", 00:20:59.848 "nguid": "F284BF5BB35C4C95BA0904BA6032BBC7", 00:20:59.848 "uuid": "f284bf5b-b35c-4c95-ba09-04ba6032bbc7", 00:20:59.848 "no_auto_visible": false 00:20:59.848 } 00:20:59.848 } 00:20:59.848 }, 00:20:59.848 { 00:20:59.848 "method": "nvmf_subsystem_add_listener", 00:20:59.848 "params": { 00:20:59.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.848 "listen_address": { 00:20:59.848 "trtype": "TCP", 00:20:59.848 "adrfam": "IPv4", 00:20:59.848 "traddr": "10.0.0.2", 00:20:59.848 "trsvcid": "4420" 00:20:59.848 }, 00:20:59.848 "secure_channel": true 00:20:59.848 } 00:20:59.848 } 00:20:59.848 ] 00:20:59.848 } 00:20:59.848 ] 00:20:59.848 }' 00:20:59.848 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:59.848 09:29:10 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.848 09:29:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=862804 00:20:59.848 09:29:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:59.848 09:29:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 862804 00:20:59.848 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 862804 ']' 00:20:59.848 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.848 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.848 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.848 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.848 09:29:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.848 [2024-07-15 09:29:10.893048] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:20:59.848 [2024-07-15 09:29:10.893139] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.848 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.848 [2024-07-15 09:29:10.954744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.105 [2024-07-15 09:29:11.052744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.105 [2024-07-15 09:29:11.052790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.105 [2024-07-15 09:29:11.052825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.105 [2024-07-15 09:29:11.052837] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.105 [2024-07-15 09:29:11.052846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
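The -c /dev/fd/62 seen above is how the test feeds a previously captured configuration back into a fresh target: save_config dumps the running setup (including the TLS listener and PSK host entry) as JSON, and process substitution replays it at boot. A sketch of that round-trip under the same paths; the temp file is illustrative only, standing in for the shell variable tls.sh pipes through /dev/fd:

./scripts/rpc.py save_config > /tmp/tgt.json
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(cat /tmp/tgt.json)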
00:21:00.105 [2024-07-15 09:29:11.052918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.105 [2024-07-15 09:29:11.280303] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.105 [2024-07-15 09:29:11.296294] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:00.361 [2024-07-15 09:29:11.312341] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:00.361 [2024-07-15 09:29:11.319970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.925 09:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.925 09:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:00.925 09:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:00.925 09:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:00.925 09:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.925 09:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.925 09:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=862956 00:21:00.925 09:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 862956 /var/tmp/bdevperf.sock 00:21:00.925 09:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 862956 ']' 00:21:00.925 09:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.925 09:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:00.925 09:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.925 09:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:00.925 "subsystems": [ 00:21:00.925 { 00:21:00.925 "subsystem": "keyring", 00:21:00.925 "config": [] 00:21:00.925 }, 00:21:00.925 { 00:21:00.925 "subsystem": "iobuf", 00:21:00.925 "config": [ 00:21:00.925 { 00:21:00.925 "method": "iobuf_set_options", 00:21:00.925 "params": { 00:21:00.925 "small_pool_count": 8192, 00:21:00.925 "large_pool_count": 1024, 00:21:00.925 "small_bufsize": 8192, 00:21:00.925 "large_bufsize": 135168 00:21:00.925 } 00:21:00.925 } 00:21:00.925 ] 00:21:00.925 }, 00:21:00.925 { 00:21:00.925 "subsystem": "sock", 00:21:00.925 "config": [ 00:21:00.925 { 00:21:00.925 "method": "sock_set_default_impl", 00:21:00.925 "params": { 00:21:00.925 "impl_name": "posix" 00:21:00.925 } 00:21:00.925 }, 00:21:00.925 { 00:21:00.925 "method": "sock_impl_set_options", 00:21:00.925 "params": { 00:21:00.925 "impl_name": "ssl", 00:21:00.925 "recv_buf_size": 4096, 00:21:00.925 "send_buf_size": 4096, 00:21:00.925 "enable_recv_pipe": true, 00:21:00.925 "enable_quickack": false, 00:21:00.925 "enable_placement_id": 0, 00:21:00.925 "enable_zerocopy_send_server": true, 00:21:00.925 "enable_zerocopy_send_client": false, 00:21:00.925 "zerocopy_threshold": 0, 00:21:00.925 "tls_version": 0, 00:21:00.925 "enable_ktls": false 00:21:00.925 } 00:21:00.925 }, 00:21:00.925 { 00:21:00.925 "method": "sock_impl_set_options", 00:21:00.925 "params": { 00:21:00.925 "impl_name": "posix", 00:21:00.925 "recv_buf_size": 2097152, 00:21:00.925 "send_buf_size": 2097152, 00:21:00.925 "enable_recv_pipe": true, 00:21:00.925 
"enable_quickack": false, 00:21:00.925 "enable_placement_id": 0, 00:21:00.925 "enable_zerocopy_send_server": true, 00:21:00.925 "enable_zerocopy_send_client": false, 00:21:00.925 "zerocopy_threshold": 0, 00:21:00.925 "tls_version": 0, 00:21:00.925 "enable_ktls": false 00:21:00.925 } 00:21:00.925 } 00:21:00.925 ] 00:21:00.925 }, 00:21:00.925 { 00:21:00.925 "subsystem": "vmd", 00:21:00.925 "config": [] 00:21:00.925 }, 00:21:00.925 { 00:21:00.925 "subsystem": "accel", 00:21:00.925 "config": [ 00:21:00.925 { 00:21:00.925 "method": "accel_set_options", 00:21:00.925 "params": { 00:21:00.925 "small_cache_size": 128, 00:21:00.925 "large_cache_size": 16, 00:21:00.925 "task_count": 2048, 00:21:00.925 "sequence_count": 2048, 00:21:00.925 "buf_count": 2048 00:21:00.925 } 00:21:00.925 } 00:21:00.925 ] 00:21:00.925 }, 00:21:00.925 { 00:21:00.925 "subsystem": "bdev", 00:21:00.925 "config": [ 00:21:00.925 { 00:21:00.925 "method": "bdev_set_options", 00:21:00.925 "params": { 00:21:00.925 "bdev_io_pool_size": 65535, 00:21:00.925 "bdev_io_cache_size": 256, 00:21:00.925 "bdev_auto_examine": true, 00:21:00.925 "iobuf_small_cache_size": 128, 00:21:00.925 "iobuf_large_cache_size": 16 00:21:00.925 } 00:21:00.925 }, 00:21:00.925 { 00:21:00.925 "method": "bdev_raid_set_options", 00:21:00.925 "params": { 00:21:00.925 "process_window_size_kb": 1024 00:21:00.925 } 00:21:00.925 }, 00:21:00.925 { 00:21:00.925 "method": "bdev_iscsi_set_options", 00:21:00.925 "params": { 00:21:00.925 "timeout_sec": 30 00:21:00.925 } 00:21:00.925 }, 00:21:00.925 { 00:21:00.925 "method": "bdev_nvme_set_options", 00:21:00.925 "params": { 00:21:00.925 "action_on_timeout": "none", 00:21:00.925 "timeout_us": 0, 00:21:00.925 "timeout_admin_us": 0, 00:21:00.925 "keep_alive_timeout_ms": 10000, 00:21:00.925 "arbitration_burst": 0, 00:21:00.925 "low_priority_weight": 0, 00:21:00.925 "medium_priority_weight": 0, 00:21:00.925 "high_priority_weight": 0, 00:21:00.925 "nvme_adminq_poll_period_us": 10000, 00:21:00.925 "nvme_ioq_poll_period_us": 0, 00:21:00.925 "io_queue_requests": 512, 00:21:00.925 "delay_cmd_submit": true, 00:21:00.925 "transport_retry_count": 4, 00:21:00.925 "bdev_retry_count": 3, 00:21:00.926 "transport_ack_timeout": 0, 00:21:00.926 "ctrlr_loss_timeout_sec": 0, 00:21:00.926 "reconnect_delay_sec": 0, 00:21:00.926 "fast_io_fail_timeout_sec": 0, 00:21:00.926 "disable_auto_failback": false, 00:21:00.926 "generate_uuids": false, 00:21:00.926 "transport_tos": 0, 00:21:00.926 "nvme_error_stat": false, 00:21:00.926 "rdma_srq_size": 0, 00:21:00.926 "io_path_stat": false, 00:21:00.926 "allow_accel_sequence": false, 00:21:00.926 "rdma_max_cq_size": 0, 00:21:00.926 "rdma_cm_event_timeout_ms": 0, 00:21:00.926 "dhchap_digests": [ 00:21:00.926 "sha256", 00:21:00.926 "sha384", 00:21:00.926 "sha512" 00:21:00.926 ], 00:21:00.926 "dhchap_dhgroups": [ 00:21:00.926 "null", 00:21:00.926 "ffdhe2048", 00:21:00.926 "ffdhe3072", 00:21:00.926 "ffdhe4096", 00:21:00.926 "ffdhe6144", 00:21:00.926 "ffdhe8192" 00:21:00.926 ] 00:21:00.926 } 00:21:00.926 }, 00:21:00.926 { 00:21:00.926 "method": "bdev_nvme_attach_controller", 00:21:00.926 "params": { 00:21:00.926 "name": "TLSTEST", 00:21:00.926 "trtype": "TCP", 00:21:00.926 "adrfam": "IPv4", 00:21:00.926 "traddr": "10.0.0.2", 00:21:00.926 "trsvcid": "4420", 00:21:00.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.926 "prchk_reftag": false, 00:21:00.926 "prchk_guard": false, 00:21:00.926 "ctrlr_loss_timeout_sec": 0, 00:21:00.926 "reconnect_delay_sec": 0, 00:21:00.926 "fast_io_fail_timeout_sec": 0, 00:21:00.926 
"psk": "/tmp/tmp.hE4n99WUFQ", 00:21:00.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.926 "hdgst": false, 00:21:00.926 "ddgst": false 00:21:00.926 } 00:21:00.926 }, 00:21:00.926 { 00:21:00.926 "method": "bdev_nvme_set_hotplug", 00:21:00.926 "params": { 00:21:00.926 "period_us": 100000, 00:21:00.926 "enable": false 00:21:00.926 } 00:21:00.926 }, 00:21:00.926 { 00:21:00.926 "method": "bdev_wait_for_examine" 00:21:00.926 } 00:21:00.926 ] 00:21:00.926 }, 00:21:00.926 { 00:21:00.926 "subsystem": "nbd", 00:21:00.926 "config": [] 00:21:00.926 } 00:21:00.926 ] 00:21:00.926 }' 00:21:00.926 09:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.926 09:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.926 09:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.926 [2024-07-15 09:29:11.888560] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:21:00.926 [2024-07-15 09:29:11.888639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862956 ] 00:21:00.926 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.926 [2024-07-15 09:29:11.946836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.926 [2024-07-15 09:29:12.054882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.183 [2024-07-15 09:29:12.227654] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.183 [2024-07-15 09:29:12.227810] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:01.745 09:29:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.745 09:29:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:01.745 09:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:02.002 Running I/O for 10 seconds... 
00:21:11.970
00:21:11.970 Latency(us)
00:21:11.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:11.970 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:11.970 Verification LBA range: start 0x0 length 0x2000
00:21:11.970 TLSTESTn1 : 10.02 3357.68 13.12 0.00 0.00 38052.75 9466.31 31651.46
00:21:11.970 ===================================================================================================================
00:21:11.970 Total : 3357.68 13.12 0.00 0.00 38052.75 9466.31 31651.46
00:21:11.970 0
00:21:11.970 09:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:11.970 09:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 862956
00:21:11.970 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 862956 ']'
00:21:11.970 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 862956
00:21:11.970 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:11.970 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:11.970 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 862956
00:21:11.970 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:21:11.970 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:21:11.970 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 862956'
00:21:11.970 killing process with pid 862956
00:21:11.970 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 862956
00:21:11.970 Received shutdown signal, test time was about 10.000000 seconds
00:21:11.970
00:21:11.970 Latency(us)
00:21:11.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:11.970 ===================================================================================================================
00:21:11.970 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:11.970 [2024-07-15 09:29:23.080301] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:21:11.970 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 862956
00:21:12.228 09:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 862804
00:21:12.228 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 862804 ']'
00:21:12.228 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 862804
00:21:12.228 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:12.228 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:12.228 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 862804
00:21:12.228 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:12.228 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:12.228 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 862804'
00:21:12.228 killing process with pid 862804
00:21:12.228 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 862804
00:21:12.228 [2024-07-15 09:29:23.366178] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1
times 00:21:12.228 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 862804 00:21:12.487 09:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:12.487 09:29:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:12.487 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:12.487 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.487 09:29:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=864284 00:21:12.487 09:29:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:12.487 09:29:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 864284 00:21:12.487 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 864284 ']' 00:21:12.487 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.487 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.487 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.487 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.487 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.745 [2024-07-15 09:29:23.691344] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:21:12.745 [2024-07-15 09:29:23.691440] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.745 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.745 [2024-07-15 09:29:23.752732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.745 [2024-07-15 09:29:23.856528] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.745 [2024-07-15 09:29:23.856594] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.745 [2024-07-15 09:29:23.856607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.745 [2024-07-15 09:29:23.856617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.745 [2024-07-15 09:29:23.856641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
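Both teardowns above expand the same killprocess helper: validate the pid, resolve the command name, refuse to signal a sudo wrapper, SIGTERM, then reap. Condensed into one sketch (simplified; the real helper in autotest_common.sh covers more corner cases):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 0        # already gone
      local process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1        # never signal the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true               # reap it so the next stage starts clean
  }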
00:21:12.745 [2024-07-15 09:29:23.856672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.003 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:13.003 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:13.003 09:29:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:13.003 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:13.003 09:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.003 09:29:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.003 09:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.hE4n99WUFQ 00:21:13.003 09:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hE4n99WUFQ 00:21:13.003 09:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:13.261 [2024-07-15 09:29:24.276744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.261 09:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:13.518 09:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:13.775 [2024-07-15 09:29:24.818148] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.775 [2024-07-15 09:29:24.818360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.775 09:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:14.033 malloc0 00:21:14.033 09:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:14.291 09:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hE4n99WUFQ 00:21:14.548 [2024-07-15 09:29:25.654889] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:14.548 09:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=864573 00:21:14.548 09:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:14.548 09:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:14.548 09:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 864573 /var/tmp/bdevperf.sock 00:21:14.548 09:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 864573 ']' 00:21:14.548 09:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.548 09:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.548 09:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.548 09:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.548 09:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.548 [2024-07-15 09:29:25.711056] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:21:14.548 [2024-07-15 09:29:25.711140] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864573 ] 00:21:14.548 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.806 [2024-07-15 09:29:25.770172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.806 [2024-07-15 09:29:25.878102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.806 09:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.806 09:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:14.806 09:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hE4n99WUFQ 00:21:15.063 09:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:15.321 [2024-07-15 09:29:26.450295] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.579 nvme0n1 00:21:15.579 09:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:15.579 Running I/O for 1 seconds... 
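Taken together, the RPCs traced in this stage are the complete PSK/TLS setup, target side first, then initiator side. Replayed below in the order the log shows them (NQNs, addresses, socket paths and the key file are exactly those of this run; bare rpc.py stands in for the full scripts/rpc.py path):

  # Target side: TCP transport, subsystem, TLS listener (-k), namespace, PSK-bound host
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hE4n99WUFQ
  # Initiator side (bdevperf's socket): register the PSK under a key name, attach by name
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hE4n99WUFQ
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1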
00:21:16.511
00:21:16.511 Latency(us)
00:21:16.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:16.511 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:16.511 Verification LBA range: start 0x0 length 0x2000
00:21:16.511 nvme0n1 : 1.02 3383.96 13.22 0.00 0.00 37477.12 9223.59 46603.38
00:21:16.511 ===================================================================================================================
00:21:16.511 Total : 3383.96 13.22 0.00 0.00 37477.12 9223.59 46603.38
00:21:16.511 0
00:21:16.511 09:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 864573
00:21:16.511 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 864573 ']'
00:21:16.511 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 864573
00:21:16.511 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:16.511 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:16.511 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864573
00:21:16.769 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:16.769 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:16.769 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864573'
00:21:16.769 killing process with pid 864573
00:21:16.769 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 864573
00:21:16.769 Received shutdown signal, test time was about 1.000000 seconds
00:21:16.769
00:21:16.769 Latency(us)
00:21:16.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:16.769 ===================================================================================================================
00:21:16.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:16.769 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 864573
00:21:16.769 09:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 864284
00:21:16.769 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 864284 ']'
00:21:16.769 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 864284
00:21:16.769 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:17.027 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:17.027 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864284
00:21:17.027 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:17.027 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:17.027 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864284'
00:21:17.027 killing process with pid 864284
00:21:17.027 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 864284
00:21:17.027 [2024-07-15 09:29:27.993211] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:21:17.027 09:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 864284
00:21:17.286 09:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart
00:21:17.286 09:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:17.286 09:29:28
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:17.286 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.286 09:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=864849 00:21:17.286 09:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:17.286 09:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 864849 00:21:17.286 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 864849 ']' 00:21:17.286 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.286 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.286 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.286 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.286 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.286 [2024-07-15 09:29:28.321561] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:21:17.286 [2024-07-15 09:29:28.321656] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.286 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.286 [2024-07-15 09:29:28.382230] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.544 [2024-07-15 09:29:28.483989] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.544 [2024-07-15 09:29:28.484035] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.544 [2024-07-15 09:29:28.484063] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.544 [2024-07-15 09:29:28.484074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.544 [2024-07-15 09:29:28.484084] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:17.544 [2024-07-15 09:29:28.484125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.544 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.544 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:17.544 09:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.544 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:17.544 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.544 09:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.544 09:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:17.544 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.544 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.544 [2024-07-15 09:29:28.619592] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.544 malloc0 00:21:17.544 [2024-07-15 09:29:28.650448] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:17.544 [2024-07-15 09:29:28.650681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.544 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.544 09:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=864983 00:21:17.545 09:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:17.545 09:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 864983 /var/tmp/bdevperf.sock 00:21:17.545 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 864983 ']' 00:21:17.545 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.545 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.545 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.545 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.545 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.545 [2024-07-15 09:29:28.719630] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
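Note the bare rpc_cmd at tls.sh:239: called with no arguments, the framework helper reads a batch of RPCs from stdin (xtrace is disabled around it, so only their notices appear above). Stock rpc.py can likewise execute line-separated commands from stdin; the batch below is an approximation inferred from the visible transport, malloc0 and TLS-listener notices, not quoted from tls.sh:

  # Hedged approximation of the batched setup; the framework's rpc_cmd keeps a
  # persistent rpc.py daemon rather than a one-shot invocation like this
  ./scripts/rpc.py -s /var/tmp/spdk.sock <<'EOF'
  nvmf_create_transport -t tcp -o
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  bdev_malloc_create 32 4096 -b malloc0
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  EOF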
00:21:17.545 [2024-07-15 09:29:28.719709] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864983 ] 00:21:17.803 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.803 [2024-07-15 09:29:28.776605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.803 [2024-07-15 09:29:28.883213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.803 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.803 09:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:17.803 09:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hE4n99WUFQ 00:21:18.061 09:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:18.318 [2024-07-15 09:29:29.444246] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.576 nvme0n1 00:21:18.576 09:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.576 Running I/O for 1 seconds... 00:21:19.509 00:21:19.509 Latency(us) 00:21:19.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.509 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:19.509 Verification LBA range: start 0x0 length 0x2000 00:21:19.509 nvme0n1 : 1.02 3190.15 12.46 0.00 0.00 39771.23 5825.42 47962.64 00:21:19.509 =================================================================================================================== 00:21:19.509 Total : 3190.15 12.46 0.00 0.00 39771.23 5825.42 47962.64 00:21:19.509 0 00:21:19.509 09:29:30 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:19.509 09:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.509 09:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.767 09:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.767 09:29:30 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:19.767 "subsystems": [ 00:21:19.767 { 00:21:19.767 "subsystem": "keyring", 00:21:19.767 "config": [ 00:21:19.767 { 00:21:19.767 "method": "keyring_file_add_key", 00:21:19.767 "params": { 00:21:19.767 "name": "key0", 00:21:19.767 "path": "/tmp/tmp.hE4n99WUFQ" 00:21:19.767 } 00:21:19.767 } 00:21:19.767 ] 00:21:19.767 }, 00:21:19.767 { 00:21:19.767 "subsystem": "iobuf", 00:21:19.767 "config": [ 00:21:19.767 { 00:21:19.767 "method": "iobuf_set_options", 00:21:19.767 "params": { 00:21:19.767 "small_pool_count": 8192, 00:21:19.767 "large_pool_count": 1024, 00:21:19.767 "small_bufsize": 8192, 00:21:19.767 "large_bufsize": 135168 00:21:19.767 } 00:21:19.767 } 00:21:19.767 ] 00:21:19.767 }, 00:21:19.767 { 00:21:19.767 "subsystem": "sock", 00:21:19.767 "config": [ 00:21:19.767 { 00:21:19.767 "method": "sock_set_default_impl", 00:21:19.767 "params": { 00:21:19.767 "impl_name": "posix" 00:21:19.767 } 
00:21:19.767 }, 00:21:19.767 { 00:21:19.767 "method": "sock_impl_set_options", 00:21:19.767 "params": { 00:21:19.767 "impl_name": "ssl", 00:21:19.767 "recv_buf_size": 4096, 00:21:19.767 "send_buf_size": 4096, 00:21:19.767 "enable_recv_pipe": true, 00:21:19.767 "enable_quickack": false, 00:21:19.767 "enable_placement_id": 0, 00:21:19.767 "enable_zerocopy_send_server": true, 00:21:19.767 "enable_zerocopy_send_client": false, 00:21:19.767 "zerocopy_threshold": 0, 00:21:19.767 "tls_version": 0, 00:21:19.767 "enable_ktls": false 00:21:19.767 } 00:21:19.767 }, 00:21:19.767 { 00:21:19.767 "method": "sock_impl_set_options", 00:21:19.767 "params": { 00:21:19.767 "impl_name": "posix", 00:21:19.767 "recv_buf_size": 2097152, 00:21:19.767 "send_buf_size": 2097152, 00:21:19.767 "enable_recv_pipe": true, 00:21:19.767 "enable_quickack": false, 00:21:19.767 "enable_placement_id": 0, 00:21:19.767 "enable_zerocopy_send_server": true, 00:21:19.767 "enable_zerocopy_send_client": false, 00:21:19.767 "zerocopy_threshold": 0, 00:21:19.767 "tls_version": 0, 00:21:19.767 "enable_ktls": false 00:21:19.767 } 00:21:19.767 } 00:21:19.767 ] 00:21:19.767 }, 00:21:19.767 { 00:21:19.767 "subsystem": "vmd", 00:21:19.767 "config": [] 00:21:19.767 }, 00:21:19.767 { 00:21:19.767 "subsystem": "accel", 00:21:19.767 "config": [ 00:21:19.767 { 00:21:19.767 "method": "accel_set_options", 00:21:19.767 "params": { 00:21:19.767 "small_cache_size": 128, 00:21:19.767 "large_cache_size": 16, 00:21:19.767 "task_count": 2048, 00:21:19.767 "sequence_count": 2048, 00:21:19.767 "buf_count": 2048 00:21:19.767 } 00:21:19.767 } 00:21:19.767 ] 00:21:19.767 }, 00:21:19.767 { 00:21:19.767 "subsystem": "bdev", 00:21:19.767 "config": [ 00:21:19.767 { 00:21:19.767 "method": "bdev_set_options", 00:21:19.767 "params": { 00:21:19.767 "bdev_io_pool_size": 65535, 00:21:19.767 "bdev_io_cache_size": 256, 00:21:19.767 "bdev_auto_examine": true, 00:21:19.767 "iobuf_small_cache_size": 128, 00:21:19.767 "iobuf_large_cache_size": 16 00:21:19.767 } 00:21:19.767 }, 00:21:19.767 { 00:21:19.768 "method": "bdev_raid_set_options", 00:21:19.768 "params": { 00:21:19.768 "process_window_size_kb": 1024 00:21:19.768 } 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "method": "bdev_iscsi_set_options", 00:21:19.768 "params": { 00:21:19.768 "timeout_sec": 30 00:21:19.768 } 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "method": "bdev_nvme_set_options", 00:21:19.768 "params": { 00:21:19.768 "action_on_timeout": "none", 00:21:19.768 "timeout_us": 0, 00:21:19.768 "timeout_admin_us": 0, 00:21:19.768 "keep_alive_timeout_ms": 10000, 00:21:19.768 "arbitration_burst": 0, 00:21:19.768 "low_priority_weight": 0, 00:21:19.768 "medium_priority_weight": 0, 00:21:19.768 "high_priority_weight": 0, 00:21:19.768 "nvme_adminq_poll_period_us": 10000, 00:21:19.768 "nvme_ioq_poll_period_us": 0, 00:21:19.768 "io_queue_requests": 0, 00:21:19.768 "delay_cmd_submit": true, 00:21:19.768 "transport_retry_count": 4, 00:21:19.768 "bdev_retry_count": 3, 00:21:19.768 "transport_ack_timeout": 0, 00:21:19.768 "ctrlr_loss_timeout_sec": 0, 00:21:19.768 "reconnect_delay_sec": 0, 00:21:19.768 "fast_io_fail_timeout_sec": 0, 00:21:19.768 "disable_auto_failback": false, 00:21:19.768 "generate_uuids": false, 00:21:19.768 "transport_tos": 0, 00:21:19.768 "nvme_error_stat": false, 00:21:19.768 "rdma_srq_size": 0, 00:21:19.768 "io_path_stat": false, 00:21:19.768 "allow_accel_sequence": false, 00:21:19.768 "rdma_max_cq_size": 0, 00:21:19.768 "rdma_cm_event_timeout_ms": 0, 00:21:19.768 "dhchap_digests": [ 00:21:19.768 "sha256", 
00:21:19.768 "sha384", 00:21:19.768 "sha512" 00:21:19.768 ], 00:21:19.768 "dhchap_dhgroups": [ 00:21:19.768 "null", 00:21:19.768 "ffdhe2048", 00:21:19.768 "ffdhe3072", 00:21:19.768 "ffdhe4096", 00:21:19.768 "ffdhe6144", 00:21:19.768 "ffdhe8192" 00:21:19.768 ] 00:21:19.768 } 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "method": "bdev_nvme_set_hotplug", 00:21:19.768 "params": { 00:21:19.768 "period_us": 100000, 00:21:19.768 "enable": false 00:21:19.768 } 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "method": "bdev_malloc_create", 00:21:19.768 "params": { 00:21:19.768 "name": "malloc0", 00:21:19.768 "num_blocks": 8192, 00:21:19.768 "block_size": 4096, 00:21:19.768 "physical_block_size": 4096, 00:21:19.768 "uuid": "d8b804ad-e4cb-4b73-93f2-cc31300af786", 00:21:19.768 "optimal_io_boundary": 0 00:21:19.768 } 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "method": "bdev_wait_for_examine" 00:21:19.768 } 00:21:19.768 ] 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "subsystem": "nbd", 00:21:19.768 "config": [] 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "subsystem": "scheduler", 00:21:19.768 "config": [ 00:21:19.768 { 00:21:19.768 "method": "framework_set_scheduler", 00:21:19.768 "params": { 00:21:19.768 "name": "static" 00:21:19.768 } 00:21:19.768 } 00:21:19.768 ] 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "subsystem": "nvmf", 00:21:19.768 "config": [ 00:21:19.768 { 00:21:19.768 "method": "nvmf_set_config", 00:21:19.768 "params": { 00:21:19.768 "discovery_filter": "match_any", 00:21:19.768 "admin_cmd_passthru": { 00:21:19.768 "identify_ctrlr": false 00:21:19.768 } 00:21:19.768 } 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "method": "nvmf_set_max_subsystems", 00:21:19.768 "params": { 00:21:19.768 "max_subsystems": 1024 00:21:19.768 } 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "method": "nvmf_set_crdt", 00:21:19.768 "params": { 00:21:19.768 "crdt1": 0, 00:21:19.768 "crdt2": 0, 00:21:19.768 "crdt3": 0 00:21:19.768 } 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "method": "nvmf_create_transport", 00:21:19.768 "params": { 00:21:19.768 "trtype": "TCP", 00:21:19.768 "max_queue_depth": 128, 00:21:19.768 "max_io_qpairs_per_ctrlr": 127, 00:21:19.768 "in_capsule_data_size": 4096, 00:21:19.768 "max_io_size": 131072, 00:21:19.768 "io_unit_size": 131072, 00:21:19.768 "max_aq_depth": 128, 00:21:19.768 "num_shared_buffers": 511, 00:21:19.768 "buf_cache_size": 4294967295, 00:21:19.768 "dif_insert_or_strip": false, 00:21:19.768 "zcopy": false, 00:21:19.768 "c2h_success": false, 00:21:19.768 "sock_priority": 0, 00:21:19.768 "abort_timeout_sec": 1, 00:21:19.768 "ack_timeout": 0, 00:21:19.768 "data_wr_pool_size": 0 00:21:19.768 } 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "method": "nvmf_create_subsystem", 00:21:19.768 "params": { 00:21:19.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.768 "allow_any_host": false, 00:21:19.768 "serial_number": "00000000000000000000", 00:21:19.768 "model_number": "SPDK bdev Controller", 00:21:19.768 "max_namespaces": 32, 00:21:19.768 "min_cntlid": 1, 00:21:19.768 "max_cntlid": 65519, 00:21:19.768 "ana_reporting": false 00:21:19.768 } 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "method": "nvmf_subsystem_add_host", 00:21:19.768 "params": { 00:21:19.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.768 "host": "nqn.2016-06.io.spdk:host1", 00:21:19.768 "psk": "key0" 00:21:19.768 } 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "method": "nvmf_subsystem_add_ns", 00:21:19.768 "params": { 00:21:19.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.768 "namespace": { 00:21:19.768 "nsid": 1, 
00:21:19.768 "bdev_name": "malloc0", 00:21:19.768 "nguid": "D8B804ADE4CB4B7393F2CC31300AF786", 00:21:19.768 "uuid": "d8b804ad-e4cb-4b73-93f2-cc31300af786", 00:21:19.768 "no_auto_visible": false 00:21:19.768 } 00:21:19.768 } 00:21:19.768 }, 00:21:19.768 { 00:21:19.768 "method": "nvmf_subsystem_add_listener", 00:21:19.768 "params": { 00:21:19.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.768 "listen_address": { 00:21:19.768 "trtype": "TCP", 00:21:19.768 "adrfam": "IPv4", 00:21:19.768 "traddr": "10.0.0.2", 00:21:19.768 "trsvcid": "4420" 00:21:19.768 }, 00:21:19.768 "secure_channel": true 00:21:19.768 } 00:21:19.768 } 00:21:19.768 ] 00:21:19.768 } 00:21:19.768 ] 00:21:19.768 }' 00:21:19.768 09:29:30 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:20.026 09:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:20.026 "subsystems": [ 00:21:20.026 { 00:21:20.026 "subsystem": "keyring", 00:21:20.026 "config": [ 00:21:20.026 { 00:21:20.026 "method": "keyring_file_add_key", 00:21:20.026 "params": { 00:21:20.026 "name": "key0", 00:21:20.026 "path": "/tmp/tmp.hE4n99WUFQ" 00:21:20.026 } 00:21:20.026 } 00:21:20.026 ] 00:21:20.026 }, 00:21:20.026 { 00:21:20.026 "subsystem": "iobuf", 00:21:20.026 "config": [ 00:21:20.026 { 00:21:20.026 "method": "iobuf_set_options", 00:21:20.026 "params": { 00:21:20.026 "small_pool_count": 8192, 00:21:20.026 "large_pool_count": 1024, 00:21:20.026 "small_bufsize": 8192, 00:21:20.026 "large_bufsize": 135168 00:21:20.026 } 00:21:20.026 } 00:21:20.026 ] 00:21:20.026 }, 00:21:20.026 { 00:21:20.026 "subsystem": "sock", 00:21:20.026 "config": [ 00:21:20.026 { 00:21:20.026 "method": "sock_set_default_impl", 00:21:20.026 "params": { 00:21:20.026 "impl_name": "posix" 00:21:20.026 } 00:21:20.026 }, 00:21:20.026 { 00:21:20.026 "method": "sock_impl_set_options", 00:21:20.026 "params": { 00:21:20.026 "impl_name": "ssl", 00:21:20.026 "recv_buf_size": 4096, 00:21:20.026 "send_buf_size": 4096, 00:21:20.026 "enable_recv_pipe": true, 00:21:20.026 "enable_quickack": false, 00:21:20.026 "enable_placement_id": 0, 00:21:20.026 "enable_zerocopy_send_server": true, 00:21:20.026 "enable_zerocopy_send_client": false, 00:21:20.026 "zerocopy_threshold": 0, 00:21:20.026 "tls_version": 0, 00:21:20.026 "enable_ktls": false 00:21:20.026 } 00:21:20.026 }, 00:21:20.026 { 00:21:20.026 "method": "sock_impl_set_options", 00:21:20.026 "params": { 00:21:20.026 "impl_name": "posix", 00:21:20.026 "recv_buf_size": 2097152, 00:21:20.026 "send_buf_size": 2097152, 00:21:20.026 "enable_recv_pipe": true, 00:21:20.026 "enable_quickack": false, 00:21:20.026 "enable_placement_id": 0, 00:21:20.026 "enable_zerocopy_send_server": true, 00:21:20.026 "enable_zerocopy_send_client": false, 00:21:20.026 "zerocopy_threshold": 0, 00:21:20.026 "tls_version": 0, 00:21:20.026 "enable_ktls": false 00:21:20.026 } 00:21:20.026 } 00:21:20.026 ] 00:21:20.026 }, 00:21:20.026 { 00:21:20.026 "subsystem": "vmd", 00:21:20.026 "config": [] 00:21:20.026 }, 00:21:20.026 { 00:21:20.026 "subsystem": "accel", 00:21:20.026 "config": [ 00:21:20.026 { 00:21:20.026 "method": "accel_set_options", 00:21:20.026 "params": { 00:21:20.026 "small_cache_size": 128, 00:21:20.026 "large_cache_size": 16, 00:21:20.026 "task_count": 2048, 00:21:20.026 "sequence_count": 2048, 00:21:20.026 "buf_count": 2048 00:21:20.026 } 00:21:20.026 } 00:21:20.026 ] 00:21:20.026 }, 00:21:20.026 { 00:21:20.026 "subsystem": "bdev", 00:21:20.026 "config": [ 
00:21:20.026 { 00:21:20.026 "method": "bdev_set_options", 00:21:20.026 "params": { 00:21:20.026 "bdev_io_pool_size": 65535, 00:21:20.026 "bdev_io_cache_size": 256, 00:21:20.026 "bdev_auto_examine": true, 00:21:20.026 "iobuf_small_cache_size": 128, 00:21:20.026 "iobuf_large_cache_size": 16 00:21:20.026 } 00:21:20.026 }, 00:21:20.026 { 00:21:20.026 "method": "bdev_raid_set_options", 00:21:20.026 "params": { 00:21:20.026 "process_window_size_kb": 1024 00:21:20.026 } 00:21:20.026 }, 00:21:20.026 { 00:21:20.026 "method": "bdev_iscsi_set_options", 00:21:20.026 "params": { 00:21:20.026 "timeout_sec": 30 00:21:20.026 } 00:21:20.026 }, 00:21:20.026 { 00:21:20.026 "method": "bdev_nvme_set_options", 00:21:20.026 "params": { 00:21:20.026 "action_on_timeout": "none", 00:21:20.026 "timeout_us": 0, 00:21:20.026 "timeout_admin_us": 0, 00:21:20.026 "keep_alive_timeout_ms": 10000, 00:21:20.026 "arbitration_burst": 0, 00:21:20.026 "low_priority_weight": 0, 00:21:20.026 "medium_priority_weight": 0, 00:21:20.026 "high_priority_weight": 0, 00:21:20.026 "nvme_adminq_poll_period_us": 10000, 00:21:20.026 "nvme_ioq_poll_period_us": 0, 00:21:20.026 "io_queue_requests": 512, 00:21:20.026 "delay_cmd_submit": true, 00:21:20.026 "transport_retry_count": 4, 00:21:20.026 "bdev_retry_count": 3, 00:21:20.026 "transport_ack_timeout": 0, 00:21:20.026 "ctrlr_loss_timeout_sec": 0, 00:21:20.026 "reconnect_delay_sec": 0, 00:21:20.026 "fast_io_fail_timeout_sec": 0, 00:21:20.026 "disable_auto_failback": false, 00:21:20.026 "generate_uuids": false, 00:21:20.026 "transport_tos": 0, 00:21:20.026 "nvme_error_stat": false, 00:21:20.026 "rdma_srq_size": 0, 00:21:20.026 "io_path_stat": false, 00:21:20.026 "allow_accel_sequence": false, 00:21:20.026 "rdma_max_cq_size": 0, 00:21:20.026 "rdma_cm_event_timeout_ms": 0, 00:21:20.026 "dhchap_digests": [ 00:21:20.026 "sha256", 00:21:20.026 "sha384", 00:21:20.026 "sha512" 00:21:20.026 ], 00:21:20.026 "dhchap_dhgroups": [ 00:21:20.026 "null", 00:21:20.026 "ffdhe2048", 00:21:20.026 "ffdhe3072", 00:21:20.026 "ffdhe4096", 00:21:20.026 "ffdhe6144", 00:21:20.026 "ffdhe8192" 00:21:20.026 ] 00:21:20.026 } 00:21:20.026 }, 00:21:20.026 { 00:21:20.026 "method": "bdev_nvme_attach_controller", 00:21:20.026 "params": { 00:21:20.026 "name": "nvme0", 00:21:20.026 "trtype": "TCP", 00:21:20.026 "adrfam": "IPv4", 00:21:20.026 "traddr": "10.0.0.2", 00:21:20.026 "trsvcid": "4420", 00:21:20.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.027 "prchk_reftag": false, 00:21:20.027 "prchk_guard": false, 00:21:20.027 "ctrlr_loss_timeout_sec": 0, 00:21:20.027 "reconnect_delay_sec": 0, 00:21:20.027 "fast_io_fail_timeout_sec": 0, 00:21:20.027 "psk": "key0", 00:21:20.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.027 "hdgst": false, 00:21:20.027 "ddgst": false 00:21:20.027 } 00:21:20.027 }, 00:21:20.027 { 00:21:20.027 "method": "bdev_nvme_set_hotplug", 00:21:20.027 "params": { 00:21:20.027 "period_us": 100000, 00:21:20.027 "enable": false 00:21:20.027 } 00:21:20.027 }, 00:21:20.027 { 00:21:20.027 "method": "bdev_enable_histogram", 00:21:20.027 "params": { 00:21:20.027 "name": "nvme0n1", 00:21:20.027 "enable": true 00:21:20.027 } 00:21:20.027 }, 00:21:20.027 { 00:21:20.027 "method": "bdev_wait_for_examine" 00:21:20.027 } 00:21:20.027 ] 00:21:20.027 }, 00:21:20.027 { 00:21:20.027 "subsystem": "nbd", 00:21:20.027 "config": [] 00:21:20.027 } 00:21:20.027 ] 00:21:20.027 }' 00:21:20.027 09:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 864983 00:21:20.027 09:29:31 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 864983 ']' 00:21:20.027 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 864983 00:21:20.027 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.027 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.027 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864983 00:21:20.027 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:20.027 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:20.027 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864983' 00:21:20.027 killing process with pid 864983 00:21:20.027 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 864983 00:21:20.027 Received shutdown signal, test time was about 1.000000 seconds 00:21:20.027 00:21:20.027 Latency(us) 00:21:20.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.027 =================================================================================================================== 00:21:20.027 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.027 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 864983 00:21:20.284 09:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 864849 00:21:20.284 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 864849 ']' 00:21:20.284 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 864849 00:21:20.284 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.284 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.284 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864849 00:21:20.284 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:20.284 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:20.284 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864849' 00:21:20.284 killing process with pid 864849 00:21:20.284 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 864849 00:21:20.284 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 864849 00:21:20.543 09:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:20.543 09:29:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.543 09:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:20.543 "subsystems": [ 00:21:20.543 { 00:21:20.543 "subsystem": "keyring", 00:21:20.543 "config": [ 00:21:20.543 { 00:21:20.543 "method": "keyring_file_add_key", 00:21:20.543 "params": { 00:21:20.543 "name": "key0", 00:21:20.543 "path": "/tmp/tmp.hE4n99WUFQ" 00:21:20.543 } 00:21:20.543 } 00:21:20.543 ] 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "subsystem": "iobuf", 00:21:20.543 "config": [ 00:21:20.543 { 00:21:20.543 "method": "iobuf_set_options", 00:21:20.543 "params": { 00:21:20.543 "small_pool_count": 8192, 00:21:20.543 "large_pool_count": 1024, 00:21:20.543 "small_bufsize": 8192, 00:21:20.543 "large_bufsize": 135168 00:21:20.543 } 00:21:20.543 } 00:21:20.543 ] 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "subsystem": "sock", 00:21:20.543 "config": [ 00:21:20.543 { 00:21:20.543 "method": 
"sock_set_default_impl", 00:21:20.543 "params": { 00:21:20.543 "impl_name": "posix" 00:21:20.543 } 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "method": "sock_impl_set_options", 00:21:20.543 "params": { 00:21:20.543 "impl_name": "ssl", 00:21:20.543 "recv_buf_size": 4096, 00:21:20.543 "send_buf_size": 4096, 00:21:20.543 "enable_recv_pipe": true, 00:21:20.543 "enable_quickack": false, 00:21:20.543 "enable_placement_id": 0, 00:21:20.543 "enable_zerocopy_send_server": true, 00:21:20.543 "enable_zerocopy_send_client": false, 00:21:20.543 "zerocopy_threshold": 0, 00:21:20.543 "tls_version": 0, 00:21:20.543 "enable_ktls": false 00:21:20.543 } 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "method": "sock_impl_set_options", 00:21:20.543 "params": { 00:21:20.543 "impl_name": "posix", 00:21:20.543 "recv_buf_size": 2097152, 00:21:20.543 "send_buf_size": 2097152, 00:21:20.543 "enable_recv_pipe": true, 00:21:20.543 "enable_quickack": false, 00:21:20.543 "enable_placement_id": 0, 00:21:20.543 "enable_zerocopy_send_server": true, 00:21:20.543 "enable_zerocopy_send_client": false, 00:21:20.543 "zerocopy_threshold": 0, 00:21:20.543 "tls_version": 0, 00:21:20.543 "enable_ktls": false 00:21:20.543 } 00:21:20.543 } 00:21:20.543 ] 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "subsystem": "vmd", 00:21:20.543 "config": [] 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "subsystem": "accel", 00:21:20.543 "config": [ 00:21:20.543 { 00:21:20.543 "method": "accel_set_options", 00:21:20.543 "params": { 00:21:20.543 "small_cache_size": 128, 00:21:20.543 "large_cache_size": 16, 00:21:20.543 "task_count": 2048, 00:21:20.543 "sequence_count": 2048, 00:21:20.543 "buf_count": 2048 00:21:20.543 } 00:21:20.543 } 00:21:20.543 ] 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "subsystem": "bdev", 00:21:20.543 "config": [ 00:21:20.543 { 00:21:20.543 "method": "bdev_set_options", 00:21:20.543 "params": { 00:21:20.543 "bdev_io_pool_size": 65535, 00:21:20.543 "bdev_io_cache_size": 256, 00:21:20.543 "bdev_auto_examine": true, 00:21:20.543 "iobuf_small_cache_size": 128, 00:21:20.543 "iobuf_large_cache_size": 16 00:21:20.543 } 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "method": "bdev_raid_set_options", 00:21:20.543 "params": { 00:21:20.543 "process_window_size_kb": 1024 00:21:20.543 } 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "method": "bdev_iscsi_set_options", 00:21:20.543 "params": { 00:21:20.543 "timeout_sec": 30 00:21:20.543 } 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "method": "bdev_nvme_set_options", 00:21:20.543 "params": { 00:21:20.543 "action_on_timeout": "none", 00:21:20.543 "timeout_us": 0, 00:21:20.543 "timeout_admin_us": 0, 00:21:20.543 "keep_alive_timeout_ms": 10000, 00:21:20.543 "arbitration_burst": 0, 00:21:20.543 "low_priority_weight": 0, 00:21:20.543 "medium_priority_weight": 0, 00:21:20.543 "high_priority_weight": 0, 00:21:20.543 "nvme_adminq_poll_period_us": 10000, 00:21:20.543 "nvme_ioq_poll_period_us": 0, 00:21:20.543 "io_queue_requests": 0, 00:21:20.543 "delay_cmd_submit": true, 00:21:20.543 "transport_retry_count": 4, 00:21:20.543 "bdev_retry_count": 3, 00:21:20.543 "transport_ack_timeout": 0, 00:21:20.543 "ctrlr_loss_timeout_sec": 0, 00:21:20.543 "reconnect_delay_sec": 0, 00:21:20.543 "fast_io_fail_timeout_sec": 0, 00:21:20.543 "disable_auto_failback": false, 00:21:20.543 "generate_uuids": false, 00:21:20.543 "transport_tos": 0, 00:21:20.543 "nvme_error_stat": false, 00:21:20.543 "rdma_srq_size": 0, 00:21:20.543 "io_path_stat": false, 00:21:20.543 "allow_accel_sequence": false, 00:21:20.543 "rdma_max_cq_size": 0, 
00:21:20.543 "rdma_cm_event_timeout_ms": 0, 00:21:20.543 "dhchap_digests": [ 00:21:20.543 "sha256", 00:21:20.543 "sha384", 00:21:20.543 "sha512" 00:21:20.543 ], 00:21:20.543 "dhchap_dhgroups": [ 00:21:20.543 "null", 00:21:20.543 "ffdhe2048", 00:21:20.543 "ffdhe3072", 00:21:20.543 "ffdhe4096", 00:21:20.543 "ffdhe6144", 00:21:20.543 "ffdhe8192" 00:21:20.543 ] 00:21:20.543 } 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "method": "bdev_nvme_set_hotplug", 00:21:20.543 "params": { 00:21:20.543 "period_us": 100000, 00:21:20.543 "enable": false 00:21:20.543 } 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "method": "bdev_malloc_create", 00:21:20.543 "params": { 00:21:20.543 "name": "malloc0", 00:21:20.543 "num_blocks": 8192, 00:21:20.543 "block_size": 4096, 00:21:20.543 "physical_block_size": 4096, 00:21:20.543 "uuid": "d8b804ad-e4cb-4b73-93f2-cc31300af786", 00:21:20.543 "optimal_io_boundary": 0 00:21:20.543 } 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "method": "bdev_wait_for_examine" 00:21:20.543 } 00:21:20.543 ] 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "subsystem": "nbd", 00:21:20.543 "config": [] 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "subsystem": "scheduler", 00:21:20.543 "config": [ 00:21:20.543 { 00:21:20.543 "method": "framework_set_scheduler", 00:21:20.543 "params": { 00:21:20.543 "name": "static" 00:21:20.543 } 00:21:20.543 } 00:21:20.543 ] 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "subsystem": "nvmf", 00:21:20.543 "config": [ 00:21:20.543 { 00:21:20.543 "method": "nvmf_set_config", 00:21:20.543 "params": { 00:21:20.543 "discovery_filter": "match_any", 00:21:20.543 "admin_cmd_passthru": { 00:21:20.543 "identify_ctrlr": false 00:21:20.543 } 00:21:20.543 } 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "method": "nvmf_set_max_subsystems", 00:21:20.543 "params": { 00:21:20.543 "max_subsystems": 1024 00:21:20.543 } 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "method": "nvmf_set_crdt", 00:21:20.543 "params": { 00:21:20.543 "crdt1": 0, 00:21:20.543 "crdt2": 0, 00:21:20.543 "crdt3": 0 00:21:20.543 } 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "method": "nvmf_create_transport", 00:21:20.543 "params": { 00:21:20.543 "trtype": "TCP", 00:21:20.543 "max_queue_depth": 128, 00:21:20.543 "max_io_qpairs_per_ctrlr": 127, 00:21:20.543 "in_capsule_data_size": 4096, 00:21:20.543 "max_io_size": 131072, 00:21:20.543 "io_unit_size": 131072, 00:21:20.543 "max_aq_depth": 128, 00:21:20.543 "num_shared_buffers": 511, 00:21:20.543 "buf_cache_size": 4294967295, 00:21:20.543 "dif_insert_or_strip": false, 00:21:20.543 "zcopy": false, 00:21:20.543 "c2h_success": false, 00:21:20.543 "sock_priority": 0, 00:21:20.543 "abort_timeout_sec": 1, 00:21:20.543 "ack_timeout": 0, 00:21:20.543 "data_wr_pool_size": 0 00:21:20.543 } 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "method": "nvmf_create_subsystem", 00:21:20.543 "params": { 00:21:20.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.543 "allow_any_host": false, 00:21:20.543 "serial_number": "00000000000000000000", 00:21:20.543 "model_number": "SPDK bdev Controller", 00:21:20.543 "max_namespaces": 32, 00:21:20.543 "min_cntlid": 1, 00:21:20.543 "max_cntlid": 65519, 00:21:20.543 "ana_reporting": false 00:21:20.543 } 00:21:20.543 }, 00:21:20.543 { 00:21:20.543 "method": "nvmf_subsystem_add_host", 00:21:20.543 "params": { 00:21:20.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.543 "host": "nqn.2016-06.io.spdk:host1", 00:21:20.543 "psk": "key0" 00:21:20.544 } 00:21:20.544 }, 00:21:20.544 { 00:21:20.544 "method": "nvmf_subsystem_add_ns", 00:21:20.544 "params": { 
00:21:20.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.544 "namespace": { 00:21:20.544 "nsid": 1, 00:21:20.544 "bdev_name": "malloc0", 00:21:20.544 "nguid": "D8B804ADE4CB4B7393F2CC31300AF786", 00:21:20.544 "uuid": "d8b804ad-e4cb-4b73-93f2-cc31300af786", 00:21:20.544 "no_auto_visible": false 00:21:20.544 } 00:21:20.544 } 00:21:20.544 }, 00:21:20.544 { 00:21:20.544 "method": "nvmf_subsystem_add_listener", 00:21:20.544 "params": { 00:21:20.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.544 "listen_address": { 00:21:20.544 "trtype": "TCP", 00:21:20.544 "adrfam": "IPv4", 00:21:20.544 "traddr": "10.0.0.2", 00:21:20.544 "trsvcid": "4420" 00:21:20.544 }, 00:21:20.544 "secure_channel": true 00:21:20.544 } 00:21:20.544 } 00:21:20.544 ] 00:21:20.544 } 00:21:20.544 ] 00:21:20.544 }' 00:21:20.544 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:20.544 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.544 09:29:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=865285 00:21:20.544 09:29:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:20.544 09:29:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 865285 00:21:20.544 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 865285 ']' 00:21:20.544 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.544 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.544 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.544 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.544 09:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.802 [2024-07-15 09:29:31.767145] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:21:20.802 [2024-07-15 09:29:31.767231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.802 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.802 [2024-07-15 09:29:31.827106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.802 [2024-07-15 09:29:31.926111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.803 [2024-07-15 09:29:31.926164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.803 [2024-07-15 09:29:31.926193] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.803 [2024-07-15 09:29:31.926204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.803 [2024-07-15 09:29:31.926214] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:20.803 [2024-07-15 09:29:31.926288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.061 [2024-07-15 09:29:32.157430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.061 [2024-07-15 09:29:32.189457] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.061 [2024-07-15 09:29:32.197987] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.627 09:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.627 09:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:21.627 09:29:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.627 09:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.627 09:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.627 09:29:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.627 09:29:32 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=865436 00:21:21.627 09:29:32 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 865436 /var/tmp/bdevperf.sock 00:21:21.627 09:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 865436 ']' 00:21:21.627 09:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.627 09:29:32 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:21.627 09:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.627 09:29:32 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:21.627 "subsystems": [ 00:21:21.627 { 00:21:21.627 "subsystem": "keyring", 00:21:21.627 "config": [ 00:21:21.627 { 00:21:21.627 "method": "keyring_file_add_key", 00:21:21.627 "params": { 00:21:21.627 "name": "key0", 00:21:21.627 "path": "/tmp/tmp.hE4n99WUFQ" 00:21:21.627 } 00:21:21.627 } 00:21:21.627 ] 00:21:21.627 }, 00:21:21.627 { 00:21:21.627 "subsystem": "iobuf", 00:21:21.627 "config": [ 00:21:21.627 { 00:21:21.627 "method": "iobuf_set_options", 00:21:21.627 "params": { 00:21:21.627 "small_pool_count": 8192, 00:21:21.627 "large_pool_count": 1024, 00:21:21.627 "small_bufsize": 8192, 00:21:21.627 "large_bufsize": 135168 00:21:21.627 } 00:21:21.627 } 00:21:21.627 ] 00:21:21.627 }, 00:21:21.627 { 00:21:21.627 "subsystem": "sock", 00:21:21.627 "config": [ 00:21:21.627 { 00:21:21.627 "method": "sock_set_default_impl", 00:21:21.627 "params": { 00:21:21.627 "impl_name": "posix" 00:21:21.627 } 00:21:21.627 }, 00:21:21.627 { 00:21:21.627 "method": "sock_impl_set_options", 00:21:21.627 "params": { 00:21:21.627 "impl_name": "ssl", 00:21:21.627 "recv_buf_size": 4096, 00:21:21.627 "send_buf_size": 4096, 00:21:21.627 "enable_recv_pipe": true, 00:21:21.627 "enable_quickack": false, 00:21:21.627 "enable_placement_id": 0, 00:21:21.627 "enable_zerocopy_send_server": true, 00:21:21.627 "enable_zerocopy_send_client": false, 00:21:21.627 "zerocopy_threshold": 0, 00:21:21.627 "tls_version": 0, 00:21:21.627 "enable_ktls": false 00:21:21.627 } 00:21:21.627 }, 00:21:21.627 { 00:21:21.627 "method": "sock_impl_set_options", 00:21:21.627 "params": { 00:21:21.627 "impl_name": "posix", 00:21:21.627 "recv_buf_size": 2097152, 00:21:21.627 "send_buf_size": 2097152, 00:21:21.627 
"enable_recv_pipe": true, 00:21:21.627 "enable_quickack": false, 00:21:21.627 "enable_placement_id": 0, 00:21:21.627 "enable_zerocopy_send_server": true, 00:21:21.627 "enable_zerocopy_send_client": false, 00:21:21.627 "zerocopy_threshold": 0, 00:21:21.627 "tls_version": 0, 00:21:21.627 "enable_ktls": false 00:21:21.627 } 00:21:21.627 } 00:21:21.627 ] 00:21:21.627 }, 00:21:21.627 { 00:21:21.627 "subsystem": "vmd", 00:21:21.627 "config": [] 00:21:21.627 }, 00:21:21.627 { 00:21:21.627 "subsystem": "accel", 00:21:21.627 "config": [ 00:21:21.627 { 00:21:21.627 "method": "accel_set_options", 00:21:21.627 "params": { 00:21:21.627 "small_cache_size": 128, 00:21:21.627 "large_cache_size": 16, 00:21:21.627 "task_count": 2048, 00:21:21.627 "sequence_count": 2048, 00:21:21.627 "buf_count": 2048 00:21:21.627 } 00:21:21.627 } 00:21:21.627 ] 00:21:21.627 }, 00:21:21.627 { 00:21:21.627 "subsystem": "bdev", 00:21:21.627 "config": [ 00:21:21.627 { 00:21:21.627 "method": "bdev_set_options", 00:21:21.627 "params": { 00:21:21.627 "bdev_io_pool_size": 65535, 00:21:21.627 "bdev_io_cache_size": 256, 00:21:21.627 "bdev_auto_examine": true, 00:21:21.627 "iobuf_small_cache_size": 128, 00:21:21.627 "iobuf_large_cache_size": 16 00:21:21.627 } 00:21:21.627 }, 00:21:21.628 { 00:21:21.628 "method": "bdev_raid_set_options", 00:21:21.628 "params": { 00:21:21.628 "process_window_size_kb": 1024 00:21:21.628 } 00:21:21.628 }, 00:21:21.628 { 00:21:21.628 "method": "bdev_iscsi_set_options", 00:21:21.628 "params": { 00:21:21.628 "timeout_sec": 30 00:21:21.628 } 00:21:21.628 }, 00:21:21.628 { 00:21:21.628 "method": "bdev_nvme_set_options", 00:21:21.628 "params": { 00:21:21.628 "action_on_timeout": "none", 00:21:21.628 "timeout_us": 0, 00:21:21.628 "timeout_admin_us": 0, 00:21:21.628 "keep_alive_timeout_ms": 10000, 00:21:21.628 "arbitration_burst": 0, 00:21:21.628 "low_priority_weight": 0, 00:21:21.628 "medium_priority_weight": 0, 00:21:21.628 "high_priority_weight": 0, 00:21:21.628 "nvme_adminq_poll_period_us": 10000, 00:21:21.628 "nvme_ioq_poll_period_us": 0, 00:21:21.628 "io_queue_requests": 512, 00:21:21.628 "delay_cmd_submit": true, 00:21:21.628 "transport_retry_count": 4, 00:21:21.628 "bdev_retry_count": 3, 00:21:21.628 "transport_ack_timeout": 0, 00:21:21.628 "ctrlr_loss_timeout_sec": 0, 00:21:21.628 "reconnect_delay_sec": 0, 00:21:21.628 "fast_io_fail_timeout_sec": 0, 00:21:21.628 "disable_auto_failback": false, 00:21:21.628 "generate_uuids": false, 00:21:21.628 "transport_tos": 0, 00:21:21.628 "nvme_error_stat": false, 00:21:21.628 "rdma_srq_size": 0, 00:21:21.628 "io_path_stat": false, 00:21:21.628 "allow_accel_sequence": false, 00:21:21.628 "rdma_max_cq_size": 0, 00:21:21.628 "rdma_cm_event_timeout_ms": 0, 00:21:21.628 "dhchap_digests": [ 00:21:21.628 "sha256", 00:21:21.628 "sha384", 00:21:21.628 "sha512" 00:21:21.628 ], 00:21:21.628 "dhchap_dhgroups": [ 00:21:21.628 "null", 00:21:21.628 "ffdhe2048", 00:21:21.628 "ffdhe3072", 00:21:21.628 "ffdhe4096", 00:21:21.628 "ffdhe6144", 00:21:21.628 "ffdhe8192" 00:21:21.628 ] 00:21:21.628 } 00:21:21.628 }, 00:21:21.628 { 00:21:21.628 "method": "bdev_nvme_attach_controller", 00:21:21.628 "params": { 00:21:21.628 "name": "nvme0", 00:21:21.628 "trtype": "TCP", 00:21:21.628 "adrfam": "IPv4", 00:21:21.628 "traddr": "10.0.0.2", 00:21:21.628 "trsvcid": "4420", 00:21:21.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.628 "prchk_reftag": false, 00:21:21.628 "prchk_guard": false, 00:21:21.628 "ctrlr_loss_timeout_sec": 0, 00:21:21.628 "reconnect_delay_sec": 0, 00:21:21.628 
"fast_io_fail_timeout_sec": 0, 00:21:21.628 "psk": "key0", 00:21:21.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.628 "hdgst": false, 00:21:21.628 "ddgst": false 00:21:21.628 } 00:21:21.628 }, 00:21:21.628 { 00:21:21.628 "method": "bdev_nvme_set_hotplug", 00:21:21.628 "params": { 00:21:21.628 "period_us": 100000, 00:21:21.628 "enable": false 00:21:21.628 } 00:21:21.628 }, 00:21:21.628 { 00:21:21.628 "method": "bdev_enable_histogram", 00:21:21.628 "params": { 00:21:21.628 "name": "nvme0n1", 00:21:21.628 "enable": true 00:21:21.628 } 00:21:21.628 }, 00:21:21.628 { 00:21:21.628 "method": "bdev_wait_for_examine" 00:21:21.628 } 00:21:21.628 ] 00:21:21.628 }, 00:21:21.628 { 00:21:21.628 "subsystem": "nbd", 00:21:21.628 "config": [] 00:21:21.628 } 00:21:21.628 ] 00:21:21.628 }' 00:21:21.628 09:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.628 09:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.628 09:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.628 [2024-07-15 09:29:32.813394] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:21:21.628 [2024-07-15 09:29:32.813482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865436 ] 00:21:21.886 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.886 [2024-07-15 09:29:32.873504] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.886 [2024-07-15 09:29:32.986537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.143 [2024-07-15 09:29:33.161701] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:22.707 09:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.707 09:29:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:22.707 09:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:22.707 09:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:22.965 09:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.965 09:29:34 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:23.223 Running I/O for 1 seconds... 
00:21:24.154 00:21:24.154 Latency(us) 00:21:24.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.154 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:24.154 Verification LBA range: start 0x0 length 0x2000 00:21:24.154 nvme0n1 : 1.04 3089.04 12.07 0.00 0.00 40714.50 9466.31 39612.87 00:21:24.154 =================================================================================================================== 00:21:24.154 Total : 3089.04 12.07 0.00 0.00 40714.50 9466.31 39612.87 00:21:24.154 0 00:21:24.154 09:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:24.154 09:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:24.154 09:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:24.154 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:24.154 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:24.154 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:24.154 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:24.154 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:24.154 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:24.154 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:24.154 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:24.154 nvmf_trace.0 00:21:24.154 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:24.155 09:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 865436 00:21:24.155 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 865436 ']' 00:21:24.155 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 865436 00:21:24.155 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:24.155 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.155 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 865436 00:21:24.155 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:24.155 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:24.155 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 865436' 00:21:24.155 killing process with pid 865436 00:21:24.155 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 865436 00:21:24.155 Received shutdown signal, test time was about 1.000000 seconds 00:21:24.155 00:21:24.155 Latency(us) 00:21:24.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.155 =================================================================================================================== 00:21:24.155 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:24.155 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 865436 00:21:24.413 09:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:24.413 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:24.413 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:24.413 
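Before the teardown finishes, the initiator half of the run above is worth spelling out: bdevperf was launched idle with -z, handed its attach parameters (including the PSK reference) through the -c JSON shown earlier, and then driven over its own RPC socket. A hedged sketch of the equivalent flow follows, doing the attach as a post-start RPC instead of embedded JSON; the attach flags mirror the bdev_nvme_attach_controller call the fips test issues later in this log, except that --psk here names the keyring entry registered first (the fips test passes a file path instead).

# Hedged sketch: initiator side of the TLS run, attach done via RPC rather than -c JSON.
bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 &
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hE4n99WUFQ
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers    # expect "nvme0", as checked above
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests           # drives the timed verify workload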
09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:24.413 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:24.413 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:24.413 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:24.413 rmmod nvme_tcp 00:21:24.670 rmmod nvme_fabrics 00:21:24.670 rmmod nvme_keyring 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 865285 ']' 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 865285 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 865285 ']' 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 865285 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 865285 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 865285' 00:21:24.670 killing process with pid 865285 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 865285 00:21:24.670 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 865285 00:21:24.929 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:24.929 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:24.929 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:24.929 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:24.929 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:24.929 09:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.929 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.929 09:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.833 09:29:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:26.833 09:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.KwTDY4onjA /tmp/tmp.w5bGT2ytwU /tmp/tmp.hE4n99WUFQ 00:21:26.833 00:21:26.833 real 1m20.426s 00:21:26.833 user 2m9.313s 00:21:26.833 sys 0m24.876s 00:21:26.833 09:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:26.833 09:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.833 ************************************ 00:21:26.833 END TEST nvmf_tls 00:21:26.833 ************************************ 00:21:26.833 09:29:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:26.833 09:29:38 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:26.833 09:29:38 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:26.833 09:29:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:26.833 09:29:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:27.091 ************************************ 00:21:27.092 START TEST nvmf_fips 00:21:27.092 ************************************ 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:27.092 * Looking for test storage... 00:21:27.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:27.092 
09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:27.092 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:27.093 Error setting digest 00:21:27.093 0072BC51227F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:27.093 0072BC51227F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.093 09:29:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:29.625 
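Stripped of the xtrace noise, the FIPS preflight that just ran makes three assertions: the OpenSSL version is at least 3.0.0, a fips provider appears in openssl list -providers, and MD5 is actually refused (hence the expected "Error setting digest" above). A compact sketch of the same gate follows; sort -V stands in for the script's digit-by-digit cmp_versions loop, and /dev/null for its fd redirection, so treat it as an approximation of fips.sh rather than a drop-in replacement.

# Hedged sketch of the FIPS preflight performed above.
export OPENSSL_CONF=spdk_fips.conf                        # the test builds and exports this config
ver=$(openssl version | awk '{print $2}')
printf '%s\n' 3.0.0 "$ver" | sort -V -C || { echo "need OpenSSL >= 3.0.0"; exit 1; }
openssl list -providers | grep -qi fips || { echo "no FIPS provider loaded"; exit 1; }
if openssl md5 /dev/null >/dev/null 2>&1; then            # must fail when FIPS mode is active
    echo "MD5 still usable: FIPS mode not active"; exit 1
fi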
09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:29.625 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:29.625 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:29.625 Found net devices under 0000:09:00.0: cvl_0_0 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:29.625 Found net devices under 0000:09:00.1: cvl_0_1 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:29.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:21:29.625 00:21:29.625 --- 10.0.0.2 ping statistics --- 00:21:29.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.625 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:29.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:21:29.625 00:21:29.625 --- 10.0.0.1 ping statistics --- 00:21:29.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.625 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=867796 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 867796 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 867796 ']' 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.625 09:29:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:29.625 [2024-07-15 09:29:40.598532] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:21:29.625 [2024-07-15 09:29:40.598618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.626 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.626 [2024-07-15 09:29:40.659394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.626 [2024-07-15 09:29:40.760877] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.626 [2024-07-15 09:29:40.760922] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:29.626 [2024-07-15 09:29:40.760935] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.626 [2024-07-15 09:29:40.760946] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.626 [2024-07-15 09:29:40.760955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.626 [2024-07-15 09:29:40.760983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:30.611 09:29:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:30.611 [2024-07-15 09:29:41.798009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.868 [2024-07-15 09:29:41.814018] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.869 [2024-07-15 09:29:41.814194] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.869 [2024-07-15 09:29:41.844877] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:30.869 malloc0 00:21:30.869 09:29:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:30.869 09:29:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=867957 00:21:30.869 09:29:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:30.869 09:29:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 867957 /var/tmp/bdevperf.sock 00:21:30.869 09:29:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 867957 ']' 00:21:30.869 09:29:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.869 09:29:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:21:30.869 09:29:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.869 09:29:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.869 09:29:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:30.869 [2024-07-15 09:29:41.936775] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:21:30.869 [2024-07-15 09:29:41.936879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867957 ] 00:21:30.869 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.869 [2024-07-15 09:29:41.994939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.126 [2024-07-15 09:29:42.100589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.690 09:29:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.690 09:29:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:31.690 09:29:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:31.946 [2024-07-15 09:29:43.073137] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.946 [2024-07-15 09:29:43.073270] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:32.203 TLSTESTn1 00:21:32.203 09:29:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:32.203 Running I/O for 10 seconds... 
00:21:42.160 00:21:42.160 Latency(us) 00:21:42.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.160 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:42.160 Verification LBA range: start 0x0 length 0x2000 00:21:42.160 TLSTESTn1 : 10.03 3315.19 12.95 0.00 0.00 38533.83 9854.67 49710.27 00:21:42.160 =================================================================================================================== 00:21:42.160 Total : 3315.19 12.95 0.00 0.00 38533.83 9854.67 49710.27 00:21:42.160 0 00:21:42.160 09:29:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:42.160 09:29:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:42.160 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:21:42.160 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:21:42.160 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:42.160 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:42.160 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:42.160 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:42.160 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:42.160 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:42.160 nvmf_trace.0 00:21:42.416 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:21:42.416 09:29:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 867957 00:21:42.416 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 867957 ']' 00:21:42.416 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 867957 00:21:42.416 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:42.416 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:42.416 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 867957 00:21:42.416 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:42.416 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:42.416 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 867957' 00:21:42.416 killing process with pid 867957 00:21:42.416 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 867957 00:21:42.416 Received shutdown signal, test time was about 10.000000 seconds 00:21:42.416 00:21:42.416 Latency(us) 00:21:42.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.416 =================================================================================================================== 00:21:42.416 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.416 [2024-07-15 09:29:53.449816] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:42.416 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 867957 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.673 rmmod nvme_tcp 00:21:42.673 rmmod nvme_fabrics 00:21:42.673 rmmod nvme_keyring 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 867796 ']' 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 867796 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 867796 ']' 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 867796 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 867796 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 867796' 00:21:42.673 killing process with pid 867796 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 867796 00:21:42.673 [2024-07-15 09:29:53.806821] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:42.673 09:29:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 867796 00:21:42.930 09:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:42.930 09:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:42.930 09:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:42.930 09:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.930 09:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:42.930 09:29:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.931 09:29:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.931 09:29:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.491 09:29:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.491 09:29:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:45.491 00:21:45.491 real 0m18.092s 00:21:45.491 user 0m24.314s 00:21:45.491 sys 0m5.256s 00:21:45.491 09:29:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:45.491 09:29:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:45.491 ************************************ 00:21:45.491 END TEST nvmf_fips 00:21:45.491 
************************************ 00:21:45.491 09:29:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:45.491 09:29:56 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:45.491 09:29:56 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:45.491 09:29:56 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:45.491 09:29:56 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:45.491 09:29:56 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:45.491 09:29:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.390 09:29:58 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.390 09:29:58 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:47.390 09:29:58 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:47.390 09:29:58 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:47.390 09:29:58 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:47.391 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:47.391 09:29:58 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:47.391 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:47.391 Found net devices under 0000:09:00.0: cvl_0_0 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:47.391 Found net devices under 0000:09:00.1: cvl_0_1 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:47.391 09:29:58 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:47.391 09:29:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:47.391 09:29:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
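The trace above is nvmf/common.sh's gather_supported_nvmf_pci_devs walking the PCI bus: it matches the two Intel E810 ports (vendor 0x8086, device 0x159b, driver ice) and then resolves each PCI function to its kernel net device through sysfs. A minimal standalone sketch of the same discovery idea (not the harness function itself; only the sysfs layout and the vendor/device pair from the trace are assumed):

  #!/usr/bin/env bash
  # Match Intel E810 (0x8086:0x159b) PCI functions and list their net devices.
  intel=0x8086 e810=0x159b
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
      for net in "$pci"/net/*; do                  # e.g. .../0000:09:00.0/net/cvl_0_0
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done

With both ports mapped (cvl_0_0 and cvl_0_1), TCP_INTERFACE_LIST is non-empty and run_test hands control to the nvmf_perf_adq suite below.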
00:21:47.391 09:29:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.391 ************************************ 00:21:47.391 START TEST nvmf_perf_adq 00:21:47.391 ************************************ 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:47.391 * Looking for test storage... 00:21:47.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.391 09:29:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:49.291 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:49.291 Found 0000:09:00.1 (0x8086 - 0x159b) 
00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.291 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:49.292 Found net devices under 0000:09:00.0: cvl_0_0 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:49.292 Found net devices under 0000:09:00.1: cvl_0_1 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:49.292 09:30:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:49.858 09:30:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:53.137 09:30:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:58.408 09:30:09 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:58.408 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:58.408 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.408 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:58.409 Found net devices under 0000:09:00.0: cvl_0_0 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:58.409 Found net devices under 0000:09:00.1: cvl_0_1 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.409 09:30:09 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:58.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:21:58.409 00:21:58.409 --- 10.0.0.2 ping statistics --- 00:21:58.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.409 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:21:58.409 00:21:58.409 --- 10.0.0.1 ping statistics --- 00:21:58.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.409 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=874573 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 874573 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 874573 ']' 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.409 [2024-07-15 09:30:09.245999] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
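The two pings above are the sanity check on the physical-loopback topology nvmf_tcp_init builds: one E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace and acts as the target, while its peer port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, so NVMe/TCP traffic genuinely crosses the wire between the two ports. Condensed from the commands traced above (same interface names and addresses):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target ns -> initiator

The nvmf_tgt launch traced here is accordingly wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the DPDK/EAL initialization lines that follow come from inside the namespace.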
00:21:58.409 [2024-07-15 09:30:09.246076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.409 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.409 [2024-07-15 09:30:09.308686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.409 [2024-07-15 09:30:09.418525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.409 [2024-07-15 09:30:09.418571] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.409 [2024-07-15 09:30:09.418599] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.409 [2024-07-15 09:30:09.418610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.409 [2024-07-15 09:30:09.418629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.409 [2024-07-15 09:30:09.418706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.409 [2024-07-15 09:30:09.418771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.409 [2024-07-15 09:30:09.418843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.409 [2024-07-15 09:30:09.418838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.409 09:30:09 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:58.667 [2024-07-15 09:30:09.629830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
Malloc1
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:58.667 [2024-07-15 09:30:09.683217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=874614
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2
00:21:58.667 09:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:21:58.667 EAL: No free 2048 kB hugepages reported on node 1
00:22:00.563 09:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats
00:22:00.563 09:30:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:00.563 09:30:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:00.563 09:30:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:00.563 09:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{
"tick_rate": 2700000000,
00:22:00.563 "poll_groups": [
00:22:00.563 {
00:22:00.563 "name": "nvmf_tgt_poll_group_000",
00:22:00.563 "admin_qpairs": 1,
00:22:00.563 "io_qpairs": 1,
00:22:00.563 "current_admin_qpairs": 1,
00:22:00.563 "current_io_qpairs": 1,
00:22:00.563 "pending_bdev_io": 0,
00:22:00.563 "completed_nvme_io": 20439,
00:22:00.563 "transports": [
00:22:00.563 {
00:22:00.563 "trtype": "TCP"
00:22:00.563 }
00:22:00.563 ]
00:22:00.563 },
00:22:00.563 {
00:22:00.563 "name": "nvmf_tgt_poll_group_001",
00:22:00.563 "admin_qpairs": 0,
00:22:00.563 "io_qpairs": 1,
00:22:00.563 "current_admin_qpairs": 0,
00:22:00.563 "current_io_qpairs": 1,
00:22:00.563 "pending_bdev_io": 0,
00:22:00.563 "completed_nvme_io": 19849,
00:22:00.563 "transports": [
00:22:00.563 {
00:22:00.563 "trtype": "TCP"
00:22:00.563 }
00:22:00.563 ]
00:22:00.563 },
00:22:00.563 {
00:22:00.563 "name": "nvmf_tgt_poll_group_002",
00:22:00.563 "admin_qpairs": 0,
00:22:00.563 "io_qpairs": 1,
00:22:00.563 "current_admin_qpairs": 0,
00:22:00.563 "current_io_qpairs": 1,
00:22:00.563 "pending_bdev_io": 0,
00:22:00.563 "completed_nvme_io": 21089,
00:22:00.563 "transports": [
00:22:00.563 {
00:22:00.563 "trtype": "TCP"
00:22:00.563 }
00:22:00.563 ]
00:22:00.563 },
00:22:00.563 {
00:22:00.563 "name": "nvmf_tgt_poll_group_003",
00:22:00.563 "admin_qpairs": 0,
00:22:00.563 "io_qpairs": 1,
00:22:00.563 "current_admin_qpairs": 0,
00:22:00.563 "current_io_qpairs": 1,
00:22:00.563 "pending_bdev_io": 0,
00:22:00.563 "completed_nvme_io": 20737,
00:22:00.563 "transports": [
00:22:00.563 {
00:22:00.563 "trtype": "TCP"
00:22:00.563 }
00:22:00.563 ]
00:22:00.564 }
00:22:00.564 ]
00:22:00.564 }'
00:22:00.564 09:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:22:00.564 09:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l
00:22:00.564 09:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4
00:22:00.564 09:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]]
00:22:00.564 09:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 874614
00:22:08.785 Initializing NVMe Controllers
00:22:08.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:08.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:08.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:08.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:08.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:08.785 Initialization complete. Launching workers.
00:22:08.785 ========================================================
00:22:08.785 Latency(us)
00:22:08.785 Device Information : IOPS MiB/s Average min max
00:22:08.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10808.97 42.22 5922.19 2285.52 9809.65
00:22:08.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10488.57 40.97 6103.09 2411.23 10781.00
00:22:08.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11006.07 42.99 5816.49 2049.10 9501.58
00:22:08.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10771.97 42.08 5942.30 2159.07 10375.07
00:22:08.785 ========================================================
00:22:08.785 Total : 43075.59 168.26 5944.26 2049.10 10781.00
00:22:08.785
00:22:08.785 09:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
09:30:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
09:30:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
09:30:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
09:30:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
09:30:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
09:30:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
09:30:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
09:30:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
09:30:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
09:30:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 874573 ']'
09:30:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 874573
09:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 874573 ']'
09:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 874573
09:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname
09:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
09:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 874573
09:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0
09:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
09:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 874573'
killing process with pid 874573
09:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 874573
09:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 874573
00:22:09.043 09:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:09.043 09:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:09.043 09:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:09.043 09:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:09.043 09:30:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:09.043 09:30:20 nvmf_tcp.nvmf_perf_adq --
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.043 09:30:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.043 09:30:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.577 09:30:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:11.577 09:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:11.577 09:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:11.852 09:30:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:13.750 09:30:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.015 09:30:29 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:19.015 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:19.015 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:19.015 Found net devices under 0000:09:00.0: cvl_0_0 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:19.015 Found net devices under 0000:09:00.1: cvl_0_1 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.015 
09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.015 09:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.015 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:22:19.016 00:22:19.016 --- 10.0.0.2 ping statistics --- 00:22:19.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.016 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:22:19.016 00:22:19.016 --- 10.0.0.1 ping statistics --- 00:22:19.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.016 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:19.016 net.core.busy_poll = 1 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:19.016 net.core.busy_read = 1 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:19.016 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:19.275 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:19.275 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.275 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:19.275 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.275 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=877260 00:22:19.275 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:19.275 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 877260 00:22:19.275 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 877260 ']' 00:22:19.275 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.275 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.275 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.275 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.275 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.275 [2024-07-15 09:30:30.269910] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:22:19.276 [2024-07-15 09:30:30.269994] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.276 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.276 [2024-07-15 09:30:30.342414] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.276 [2024-07-15 09:30:30.458308] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.276 [2024-07-15 09:30:30.458372] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.276 [2024-07-15 09:30:30.458401] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.276 [2024-07-15 09:30:30.458413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.276 [2024-07-15 09:30:30.458423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
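For reference, the ADQ driver setup just traced (target/perf_adq.sh@22-38) condenses to the sketch below. Every command is copied from the trace; only the grouping, the NS shorthand, and the comments are added, and the interface/namespace names (cvl_0_0, cvl_0_0_ns_spdk) are the ones created for this run.

NS="ip netns exec cvl_0_0_ns_spdk"
# enable hardware TC offload in the ice driver, disable packet-inspect optimization
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
# busy-poll sockets instead of waiting for interrupts (host-wide, not namespaced)
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# split the port into two traffic classes: TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1, in hardware only (skip_sw)
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# align transmit-queue (XPS) mappings with the receive queues
$NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0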
00:22:19.276 [2024-07-15 09:30:30.458476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.276 [2024-07-15 09:30:30.458544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.276 [2024-07-15 09:30:30.458613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.276 [2024-07-15 09:30:30.458616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.534 [2024-07-15 09:30:30.679849] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.534 Malloc1 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.534 09:30:30 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.534 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.792 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.792 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.792 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.792 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.792 [2024-07-15 09:30:30.733081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.792 09:30:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.792 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=877406 00:22:19.792 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:19.792 09:30:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:19.792 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.689 09:30:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:21.689 09:30:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.689 09:30:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.689 09:30:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.689 09:30:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:21.689 "tick_rate": 2700000000, 00:22:21.689 "poll_groups": [ 00:22:21.689 { 00:22:21.689 "name": "nvmf_tgt_poll_group_000", 00:22:21.689 "admin_qpairs": 1, 00:22:21.689 "io_qpairs": 2, 00:22:21.689 "current_admin_qpairs": 1, 00:22:21.689 "current_io_qpairs": 2, 00:22:21.689 "pending_bdev_io": 0, 00:22:21.689 "completed_nvme_io": 23932, 00:22:21.689 "transports": [ 00:22:21.689 { 00:22:21.689 "trtype": "TCP" 00:22:21.689 } 00:22:21.689 ] 00:22:21.689 }, 00:22:21.689 { 00:22:21.689 "name": "nvmf_tgt_poll_group_001", 00:22:21.689 "admin_qpairs": 0, 00:22:21.689 "io_qpairs": 2, 00:22:21.689 "current_admin_qpairs": 0, 00:22:21.689 "current_io_qpairs": 2, 00:22:21.689 "pending_bdev_io": 0, 00:22:21.689 "completed_nvme_io": 25617, 00:22:21.689 "transports": [ 00:22:21.689 { 00:22:21.689 "trtype": "TCP" 00:22:21.689 } 00:22:21.689 ] 00:22:21.689 }, 00:22:21.689 { 00:22:21.689 "name": "nvmf_tgt_poll_group_002", 00:22:21.689 "admin_qpairs": 0, 00:22:21.689 "io_qpairs": 0, 00:22:21.689 "current_admin_qpairs": 0, 00:22:21.690 "current_io_qpairs": 0, 00:22:21.690 "pending_bdev_io": 0, 00:22:21.690 "completed_nvme_io": 0, 
00:22:21.690 "transports": [ 00:22:21.690 { 00:22:21.690 "trtype": "TCP" 00:22:21.690 } 00:22:21.690 ] 00:22:21.690 }, 00:22:21.690 { 00:22:21.690 "name": "nvmf_tgt_poll_group_003", 00:22:21.690 "admin_qpairs": 0, 00:22:21.690 "io_qpairs": 0, 00:22:21.690 "current_admin_qpairs": 0, 00:22:21.690 "current_io_qpairs": 0, 00:22:21.690 "pending_bdev_io": 0, 00:22:21.690 "completed_nvme_io": 0, 00:22:21.690 "transports": [ 00:22:21.690 { 00:22:21.690 "trtype": "TCP" 00:22:21.690 } 00:22:21.690 ] 00:22:21.690 } 00:22:21.690 ] 00:22:21.690 }' 00:22:21.690 09:30:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:21.690 09:30:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:21.690 09:30:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:21.690 09:30:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:21.690 09:30:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 877406 00:22:29.795 Initializing NVMe Controllers 00:22:29.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:29.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:29.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:29.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:29.795 Initialization complete. Launching workers. 00:22:29.795 ======================================================== 00:22:29.795 Latency(us) 00:22:29.795 Device Information : IOPS MiB/s Average min max 00:22:29.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7151.01 27.93 8956.21 1568.40 54315.95 00:22:29.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6946.51 27.13 9217.11 1644.66 53012.91 00:22:29.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6261.63 24.46 10227.76 1853.92 53571.55 00:22:29.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6871.61 26.84 9316.26 1389.50 53971.95 00:22:29.795 ======================================================== 00:22:29.795 Total : 27230.76 106.37 9406.01 1389.50 54315.95 00:22:29.795 00:22:29.795 09:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:29.795 09:30:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:29.795 09:30:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:29.795 09:30:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:29.795 09:30:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:29.795 09:30:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:29.795 09:30:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:29.795 rmmod nvme_tcp 00:22:29.795 rmmod nvme_fabrics 00:22:29.795 rmmod nvme_keyring 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 877260 ']' 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 877260 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 877260 ']' 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 877260 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 877260 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 877260' 00:22:30.053 killing process with pid 877260 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 877260 00:22:30.053 09:30:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 877260 00:22:30.313 09:30:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:30.313 09:30:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:30.313 09:30:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:30.313 09:30:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:30.313 09:30:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:30.313 09:30:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.313 09:30:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.313 09:30:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.216 09:30:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:32.216 09:30:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:32.216 00:22:32.216 real 0m45.190s 00:22:32.216 user 2m39.272s 00:22:32.216 sys 0m10.850s 00:22:32.216 09:30:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:32.216 09:30:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:32.216 ************************************ 00:22:32.216 END TEST nvmf_perf_adq 00:22:32.216 ************************************ 00:22:32.216 09:30:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:32.216 09:30:43 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:32.216 09:30:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:32.216 09:30:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:32.216 09:30:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:32.474 ************************************ 00:22:32.474 START TEST nvmf_shutdown 00:22:32.474 ************************************ 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:32.474 * Looking for test storage... 
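Before the shutdown suite gets going, one note on the gate that just passed: perf_adq.sh@99-101 decides success by counting poll groups that received no I/O qpairs. With the flower filter steering all port-4420 traffic onto two hardware queues, nvmf_tgt_poll_group_002 and _003 stayed idle, so count=2 and the [[ 2 -lt 2 ]] failure branch was skipped. The check can be replayed against a saved stats dump; a sketch, with stats.json as a hypothetical capture of the nvmf_get_stats output shown above:

# Count poll groups with zero active I/O qpairs; ADQ steering passes when at
# least two of the four groups stayed idle. The stats.json name is illustrative.
count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' stats.json | wc -l)
if [[ $count -lt 2 ]]; then
    echo "ADQ steering failed: only $count of 4 poll groups idle" >&2
    exit 1
fi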
00:22:32.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:32.474 ************************************ 00:22:32.474 START TEST nvmf_shutdown_tc1 00:22:32.474 ************************************ 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:22:32.474 09:30:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:32.474 09:30:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.376 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:34.377 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:34.377 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.377 09:30:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:34.377 Found net devices under 0000:09:00.0: cvl_0_0 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:34.377 Found net devices under 0000:09:00.1: cvl_0_1 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.377 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.378 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:34.378 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.378 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.378 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:34.378 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.378 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.378 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:34.636 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:34.636 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:34.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:22:34.637 00:22:34.637 --- 10.0.0.2 ping statistics --- 00:22:34.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.637 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:34.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:22:34.637 00:22:34.637 --- 10.0.0.1 ping statistics --- 00:22:34.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.637 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=880572 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 880572 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 880572 ']' 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.637 09:30:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.637 [2024-07-15 09:30:45.757278] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:22:34.637 [2024-07-15 09:30:45.757364] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.637 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.637 [2024-07-15 09:30:45.821019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.896 [2024-07-15 09:30:45.928715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.896 [2024-07-15 09:30:45.928768] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.896 [2024-07-15 09:30:45.928820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.896 [2024-07-15 09:30:45.928833] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.896 [2024-07-15 09:30:45.928843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.896 [2024-07-15 09:30:45.928942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.896 [2024-07-15 09:30:45.929014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.896 [2024-07-15 09:30:45.929067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:34.896 [2024-07-15 09:30:45.929070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.896 [2024-07-15 09:30:46.063475] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:34.896 09:30:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.896 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:35.154 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.154 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:35.154 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.154 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:35.154 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:35.154 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:35.154 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:35.154 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.154 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.154 Malloc1 00:22:35.154 [2024-07-15 09:30:46.137960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.154 Malloc2 00:22:35.154 Malloc3 00:22:35.154 Malloc4 00:22:35.154 Malloc5 00:22:35.413 Malloc6 00:22:35.413 Malloc7 00:22:35.413 Malloc8 00:22:35.413 Malloc9 00:22:35.413 Malloc10 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=880750 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 880750 
/var/tmp/bdevperf.sock 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 880750 ']' 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.413 { 00:22:35.413 "params": { 00:22:35.413 "name": "Nvme$subsystem", 00:22:35.413 "trtype": "$TEST_TRANSPORT", 00:22:35.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.413 "adrfam": "ipv4", 00:22:35.413 "trsvcid": "$NVMF_PORT", 00:22:35.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.413 "hdgst": ${hdgst:-false}, 00:22:35.413 "ddgst": ${ddgst:-false} 00:22:35.413 }, 00:22:35.413 "method": "bdev_nvme_attach_controller" 00:22:35.413 } 00:22:35.413 EOF 00:22:35.413 )") 00:22:35.413 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.673 { 00:22:35.673 "params": { 00:22:35.673 "name": "Nvme$subsystem", 00:22:35.673 "trtype": "$TEST_TRANSPORT", 00:22:35.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.673 "adrfam": "ipv4", 00:22:35.673 "trsvcid": "$NVMF_PORT", 00:22:35.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.673 "hdgst": ${hdgst:-false}, 00:22:35.673 "ddgst": ${ddgst:-false} 00:22:35.673 }, 00:22:35.673 "method": "bdev_nvme_attach_controller" 00:22:35.673 } 00:22:35.673 EOF 00:22:35.673 )") 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.673 { 00:22:35.673 "params": { 00:22:35.673 
"name": "Nvme$subsystem", 00:22:35.673 "trtype": "$TEST_TRANSPORT", 00:22:35.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.673 "adrfam": "ipv4", 00:22:35.673 "trsvcid": "$NVMF_PORT", 00:22:35.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.673 "hdgst": ${hdgst:-false}, 00:22:35.673 "ddgst": ${ddgst:-false} 00:22:35.673 }, 00:22:35.673 "method": "bdev_nvme_attach_controller" 00:22:35.673 } 00:22:35.673 EOF 00:22:35.673 )") 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.673 { 00:22:35.673 "params": { 00:22:35.673 "name": "Nvme$subsystem", 00:22:35.673 "trtype": "$TEST_TRANSPORT", 00:22:35.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.673 "adrfam": "ipv4", 00:22:35.673 "trsvcid": "$NVMF_PORT", 00:22:35.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.673 "hdgst": ${hdgst:-false}, 00:22:35.673 "ddgst": ${ddgst:-false} 00:22:35.673 }, 00:22:35.673 "method": "bdev_nvme_attach_controller" 00:22:35.673 } 00:22:35.673 EOF 00:22:35.673 )") 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.673 { 00:22:35.673 "params": { 00:22:35.673 "name": "Nvme$subsystem", 00:22:35.673 "trtype": "$TEST_TRANSPORT", 00:22:35.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.673 "adrfam": "ipv4", 00:22:35.673 "trsvcid": "$NVMF_PORT", 00:22:35.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.673 "hdgst": ${hdgst:-false}, 00:22:35.673 "ddgst": ${ddgst:-false} 00:22:35.673 }, 00:22:35.673 "method": "bdev_nvme_attach_controller" 00:22:35.673 } 00:22:35.673 EOF 00:22:35.673 )") 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.673 { 00:22:35.673 "params": { 00:22:35.673 "name": "Nvme$subsystem", 00:22:35.673 "trtype": "$TEST_TRANSPORT", 00:22:35.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.673 "adrfam": "ipv4", 00:22:35.673 "trsvcid": "$NVMF_PORT", 00:22:35.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.673 "hdgst": ${hdgst:-false}, 00:22:35.673 "ddgst": ${ddgst:-false} 00:22:35.673 }, 00:22:35.673 "method": "bdev_nvme_attach_controller" 00:22:35.673 } 00:22:35.673 EOF 00:22:35.673 )") 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.673 { 00:22:35.673 "params": { 00:22:35.673 "name": "Nvme$subsystem", 
00:22:35.673 "trtype": "$TEST_TRANSPORT", 00:22:35.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.673 "adrfam": "ipv4", 00:22:35.673 "trsvcid": "$NVMF_PORT", 00:22:35.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.673 "hdgst": ${hdgst:-false}, 00:22:35.673 "ddgst": ${ddgst:-false} 00:22:35.673 }, 00:22:35.673 "method": "bdev_nvme_attach_controller" 00:22:35.673 } 00:22:35.673 EOF 00:22:35.673 )") 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.673 { 00:22:35.673 "params": { 00:22:35.673 "name": "Nvme$subsystem", 00:22:35.673 "trtype": "$TEST_TRANSPORT", 00:22:35.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.673 "adrfam": "ipv4", 00:22:35.673 "trsvcid": "$NVMF_PORT", 00:22:35.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.673 "hdgst": ${hdgst:-false}, 00:22:35.673 "ddgst": ${ddgst:-false} 00:22:35.673 }, 00:22:35.673 "method": "bdev_nvme_attach_controller" 00:22:35.673 } 00:22:35.673 EOF 00:22:35.673 )") 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.673 { 00:22:35.673 "params": { 00:22:35.673 "name": "Nvme$subsystem", 00:22:35.673 "trtype": "$TEST_TRANSPORT", 00:22:35.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.673 "adrfam": "ipv4", 00:22:35.673 "trsvcid": "$NVMF_PORT", 00:22:35.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.673 "hdgst": ${hdgst:-false}, 00:22:35.673 "ddgst": ${ddgst:-false} 00:22:35.673 }, 00:22:35.673 "method": "bdev_nvme_attach_controller" 00:22:35.673 } 00:22:35.673 EOF 00:22:35.673 )") 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:35.673 { 00:22:35.673 "params": { 00:22:35.673 "name": "Nvme$subsystem", 00:22:35.673 "trtype": "$TEST_TRANSPORT", 00:22:35.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.673 "adrfam": "ipv4", 00:22:35.673 "trsvcid": "$NVMF_PORT", 00:22:35.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.673 "hdgst": ${hdgst:-false}, 00:22:35.673 "ddgst": ${ddgst:-false} 00:22:35.673 }, 00:22:35.673 "method": "bdev_nvme_attach_controller" 00:22:35.673 } 00:22:35.673 EOF 00:22:35.673 )") 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:35.673 09:30:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:35.673 "params": { 00:22:35.673 "name": "Nvme1", 00:22:35.673 "trtype": "tcp", 00:22:35.673 "traddr": "10.0.0.2", 00:22:35.673 "adrfam": "ipv4", 00:22:35.673 "trsvcid": "4420", 00:22:35.673 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.673 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:35.673 "hdgst": false, 00:22:35.673 "ddgst": false 00:22:35.673 }, 00:22:35.673 "method": "bdev_nvme_attach_controller" 00:22:35.673 },{ 00:22:35.673 "params": { 00:22:35.673 "name": "Nvme2", 00:22:35.673 "trtype": "tcp", 00:22:35.673 "traddr": "10.0.0.2", 00:22:35.673 "adrfam": "ipv4", 00:22:35.673 "trsvcid": "4420", 00:22:35.673 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:35.673 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:35.673 "hdgst": false, 00:22:35.673 "ddgst": false 00:22:35.673 }, 00:22:35.673 "method": "bdev_nvme_attach_controller" 00:22:35.673 },{ 00:22:35.673 "params": { 00:22:35.673 "name": "Nvme3", 00:22:35.673 "trtype": "tcp", 00:22:35.673 "traddr": "10.0.0.2", 00:22:35.673 "adrfam": "ipv4", 00:22:35.673 "trsvcid": "4420", 00:22:35.673 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:35.673 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:35.673 "hdgst": false, 00:22:35.673 "ddgst": false 00:22:35.673 }, 00:22:35.673 "method": "bdev_nvme_attach_controller" 00:22:35.673 },{ 00:22:35.673 "params": { 00:22:35.673 "name": "Nvme4", 00:22:35.673 "trtype": "tcp", 00:22:35.673 "traddr": "10.0.0.2", 00:22:35.674 "adrfam": "ipv4", 00:22:35.674 "trsvcid": "4420", 00:22:35.674 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:35.674 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:35.674 "hdgst": false, 00:22:35.674 "ddgst": false 00:22:35.674 }, 00:22:35.674 "method": "bdev_nvme_attach_controller" 00:22:35.674 },{ 00:22:35.674 "params": { 00:22:35.674 "name": "Nvme5", 00:22:35.674 "trtype": "tcp", 00:22:35.674 "traddr": "10.0.0.2", 00:22:35.674 "adrfam": "ipv4", 00:22:35.674 "trsvcid": "4420", 00:22:35.674 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:35.674 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:35.674 "hdgst": false, 00:22:35.674 "ddgst": false 00:22:35.674 }, 00:22:35.674 "method": "bdev_nvme_attach_controller" 00:22:35.674 },{ 00:22:35.674 "params": { 00:22:35.674 "name": "Nvme6", 00:22:35.674 "trtype": "tcp", 00:22:35.674 "traddr": "10.0.0.2", 00:22:35.674 "adrfam": "ipv4", 00:22:35.674 "trsvcid": "4420", 00:22:35.674 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:35.674 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:35.674 "hdgst": false, 00:22:35.674 "ddgst": false 00:22:35.674 }, 00:22:35.674 "method": "bdev_nvme_attach_controller" 00:22:35.674 },{ 00:22:35.674 "params": { 00:22:35.674 "name": "Nvme7", 00:22:35.674 "trtype": "tcp", 00:22:35.674 "traddr": "10.0.0.2", 00:22:35.674 "adrfam": "ipv4", 00:22:35.674 "trsvcid": "4420", 00:22:35.674 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:35.674 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:35.674 "hdgst": false, 00:22:35.674 "ddgst": false 00:22:35.674 }, 00:22:35.674 "method": "bdev_nvme_attach_controller" 00:22:35.674 },{ 00:22:35.674 "params": { 00:22:35.674 "name": "Nvme8", 00:22:35.674 "trtype": "tcp", 00:22:35.674 "traddr": "10.0.0.2", 00:22:35.674 "adrfam": "ipv4", 00:22:35.674 "trsvcid": "4420", 00:22:35.674 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:35.674 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:35.674 "hdgst": false, 
00:22:35.674 "ddgst": false 00:22:35.674 }, 00:22:35.674 "method": "bdev_nvme_attach_controller" 00:22:35.674 },{ 00:22:35.674 "params": { 00:22:35.674 "name": "Nvme9", 00:22:35.674 "trtype": "tcp", 00:22:35.674 "traddr": "10.0.0.2", 00:22:35.674 "adrfam": "ipv4", 00:22:35.674 "trsvcid": "4420", 00:22:35.674 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:35.674 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:35.674 "hdgst": false, 00:22:35.674 "ddgst": false 00:22:35.674 }, 00:22:35.674 "method": "bdev_nvme_attach_controller" 00:22:35.674 },{ 00:22:35.674 "params": { 00:22:35.674 "name": "Nvme10", 00:22:35.674 "trtype": "tcp", 00:22:35.674 "traddr": "10.0.0.2", 00:22:35.674 "adrfam": "ipv4", 00:22:35.674 "trsvcid": "4420", 00:22:35.674 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:35.674 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:35.674 "hdgst": false, 00:22:35.674 "ddgst": false 00:22:35.674 }, 00:22:35.674 "method": "bdev_nvme_attach_controller" 00:22:35.674 }' 00:22:35.674 [2024-07-15 09:30:46.646204] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:22:35.674 [2024-07-15 09:30:46.646291] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:35.674 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.674 [2024-07-15 09:30:46.709498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.674 [2024-07-15 09:30:46.819901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.047 09:30:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.047 09:30:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:37.047 09:30:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:37.047 09:30:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.047 09:30:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:37.047 09:30:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.047 09:30:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 880750 00:22:37.047 09:30:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:37.047 09:30:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:38.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 880750 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 880572 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@532 -- # local subsystem config 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.041 { 00:22:38.041 "params": { 00:22:38.041 "name": "Nvme$subsystem", 00:22:38.041 "trtype": "$TEST_TRANSPORT", 00:22:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.041 "adrfam": "ipv4", 00:22:38.041 "trsvcid": "$NVMF_PORT", 00:22:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.041 "hdgst": ${hdgst:-false}, 00:22:38.041 "ddgst": ${ddgst:-false} 00:22:38.041 }, 00:22:38.041 "method": "bdev_nvme_attach_controller" 00:22:38.041 } 00:22:38.041 EOF 00:22:38.041 )") 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.041 { 00:22:38.041 "params": { 00:22:38.041 "name": "Nvme$subsystem", 00:22:38.041 "trtype": "$TEST_TRANSPORT", 00:22:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.041 "adrfam": "ipv4", 00:22:38.041 "trsvcid": "$NVMF_PORT", 00:22:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.041 "hdgst": ${hdgst:-false}, 00:22:38.041 "ddgst": ${ddgst:-false} 00:22:38.041 }, 00:22:38.041 "method": "bdev_nvme_attach_controller" 00:22:38.041 } 00:22:38.041 EOF 00:22:38.041 )") 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.041 { 00:22:38.041 "params": { 00:22:38.041 "name": "Nvme$subsystem", 00:22:38.041 "trtype": "$TEST_TRANSPORT", 00:22:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.041 "adrfam": "ipv4", 00:22:38.041 "trsvcid": "$NVMF_PORT", 00:22:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.041 "hdgst": ${hdgst:-false}, 00:22:38.041 "ddgst": ${ddgst:-false} 00:22:38.041 }, 00:22:38.041 "method": "bdev_nvme_attach_controller" 00:22:38.041 } 00:22:38.041 EOF 00:22:38.041 )") 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.041 { 00:22:38.041 "params": { 00:22:38.041 "name": "Nvme$subsystem", 00:22:38.041 "trtype": "$TEST_TRANSPORT", 00:22:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.041 "adrfam": "ipv4", 00:22:38.041 "trsvcid": "$NVMF_PORT", 00:22:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.041 "hdgst": ${hdgst:-false}, 00:22:38.041 "ddgst": ${ddgst:-false} 00:22:38.041 }, 00:22:38.041 "method": "bdev_nvme_attach_controller" 00:22:38.041 } 00:22:38.041 EOF 00:22:38.041 )") 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.041 { 00:22:38.041 "params": { 00:22:38.041 "name": "Nvme$subsystem", 00:22:38.041 "trtype": "$TEST_TRANSPORT", 00:22:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.041 "adrfam": "ipv4", 00:22:38.041 "trsvcid": "$NVMF_PORT", 00:22:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.041 "hdgst": ${hdgst:-false}, 00:22:38.041 "ddgst": ${ddgst:-false} 00:22:38.041 }, 00:22:38.041 "method": "bdev_nvme_attach_controller" 00:22:38.041 } 00:22:38.041 EOF 00:22:38.041 )") 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.041 { 00:22:38.041 "params": { 00:22:38.041 "name": "Nvme$subsystem", 00:22:38.041 "trtype": "$TEST_TRANSPORT", 00:22:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.041 "adrfam": "ipv4", 00:22:38.041 "trsvcid": "$NVMF_PORT", 00:22:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.041 "hdgst": ${hdgst:-false}, 00:22:38.041 "ddgst": ${ddgst:-false} 00:22:38.041 }, 00:22:38.041 "method": "bdev_nvme_attach_controller" 00:22:38.041 } 00:22:38.041 EOF 00:22:38.041 )") 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.041 { 00:22:38.041 "params": { 00:22:38.041 "name": "Nvme$subsystem", 00:22:38.041 "trtype": "$TEST_TRANSPORT", 00:22:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.041 "adrfam": "ipv4", 00:22:38.041 "trsvcid": "$NVMF_PORT", 00:22:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.041 "hdgst": ${hdgst:-false}, 00:22:38.041 "ddgst": ${ddgst:-false} 00:22:38.041 }, 00:22:38.041 "method": "bdev_nvme_attach_controller" 00:22:38.041 } 00:22:38.041 EOF 00:22:38.041 )") 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.041 { 00:22:38.041 "params": { 00:22:38.041 "name": "Nvme$subsystem", 00:22:38.041 "trtype": "$TEST_TRANSPORT", 00:22:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.041 "adrfam": "ipv4", 00:22:38.041 "trsvcid": "$NVMF_PORT", 00:22:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.041 "hdgst": ${hdgst:-false}, 00:22:38.041 "ddgst": ${ddgst:-false} 00:22:38.041 }, 00:22:38.041 "method": "bdev_nvme_attach_controller" 00:22:38.041 } 00:22:38.041 EOF 00:22:38.041 )") 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.041 { 00:22:38.041 "params": { 00:22:38.041 "name": "Nvme$subsystem", 00:22:38.041 "trtype": "$TEST_TRANSPORT", 00:22:38.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.041 "adrfam": "ipv4", 00:22:38.041 "trsvcid": "$NVMF_PORT", 00:22:38.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.041 "hdgst": ${hdgst:-false}, 00:22:38.041 "ddgst": ${ddgst:-false} 00:22:38.041 }, 00:22:38.041 "method": "bdev_nvme_attach_controller" 00:22:38.041 } 00:22:38.041 EOF 00:22:38.041 )") 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.041 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.042 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.042 { 00:22:38.042 "params": { 00:22:38.042 "name": "Nvme$subsystem", 00:22:38.042 "trtype": "$TEST_TRANSPORT", 00:22:38.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.042 "adrfam": "ipv4", 00:22:38.042 "trsvcid": "$NVMF_PORT", 00:22:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.042 "hdgst": ${hdgst:-false}, 00:22:38.042 "ddgst": ${ddgst:-false} 00:22:38.042 }, 00:22:38.042 "method": "bdev_nvme_attach_controller" 00:22:38.042 } 00:22:38.042 EOF 00:22:38.042 )") 00:22:38.042 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.042 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
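The jq/IFS/printf entries above are the tail of that same generator: the comma-joined fragments are printed as a single document and consumed through process substitution, so the config never touches disk. The /dev/fd/62 and /dev/fd/63 arguments visible in the bdev_svc and bdevperf command lines are the read ends of those anonymous pipes. A hedged illustration of the invocation shape, with flags mirroring the -q 64 -o 65536 -w verify runs traced in this log:

# bash rewrites --json <(...) into --json /dev/fd/NN before exec'ing the child
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1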
00:22:38.042 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:38.042 09:30:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:38.042 "params": { 00:22:38.042 "name": "Nvme1", 00:22:38.042 "trtype": "tcp", 00:22:38.042 "traddr": "10.0.0.2", 00:22:38.042 "adrfam": "ipv4", 00:22:38.042 "trsvcid": "4420", 00:22:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.042 "hdgst": false, 00:22:38.042 "ddgst": false 00:22:38.042 }, 00:22:38.042 "method": "bdev_nvme_attach_controller" 00:22:38.042 },{ 00:22:38.042 "params": { 00:22:38.042 "name": "Nvme2", 00:22:38.042 "trtype": "tcp", 00:22:38.042 "traddr": "10.0.0.2", 00:22:38.042 "adrfam": "ipv4", 00:22:38.042 "trsvcid": "4420", 00:22:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:38.042 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:38.042 "hdgst": false, 00:22:38.042 "ddgst": false 00:22:38.042 }, 00:22:38.042 "method": "bdev_nvme_attach_controller" 00:22:38.042 },{ 00:22:38.042 "params": { 00:22:38.042 "name": "Nvme3", 00:22:38.042 "trtype": "tcp", 00:22:38.042 "traddr": "10.0.0.2", 00:22:38.042 "adrfam": "ipv4", 00:22:38.042 "trsvcid": "4420", 00:22:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:38.042 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:38.042 "hdgst": false, 00:22:38.042 "ddgst": false 00:22:38.042 }, 00:22:38.042 "method": "bdev_nvme_attach_controller" 00:22:38.042 },{ 00:22:38.042 "params": { 00:22:38.042 "name": "Nvme4", 00:22:38.042 "trtype": "tcp", 00:22:38.042 "traddr": "10.0.0.2", 00:22:38.042 "adrfam": "ipv4", 00:22:38.042 "trsvcid": "4420", 00:22:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:38.042 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:38.042 "hdgst": false, 00:22:38.042 "ddgst": false 00:22:38.042 }, 00:22:38.042 "method": "bdev_nvme_attach_controller" 00:22:38.042 },{ 00:22:38.042 "params": { 00:22:38.042 "name": "Nvme5", 00:22:38.042 "trtype": "tcp", 00:22:38.042 "traddr": "10.0.0.2", 00:22:38.042 "adrfam": "ipv4", 00:22:38.042 "trsvcid": "4420", 00:22:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:38.042 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:38.042 "hdgst": false, 00:22:38.042 "ddgst": false 00:22:38.042 }, 00:22:38.042 "method": "bdev_nvme_attach_controller" 00:22:38.042 },{ 00:22:38.042 "params": { 00:22:38.042 "name": "Nvme6", 00:22:38.042 "trtype": "tcp", 00:22:38.042 "traddr": "10.0.0.2", 00:22:38.042 "adrfam": "ipv4", 00:22:38.042 "trsvcid": "4420", 00:22:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:38.042 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:38.042 "hdgst": false, 00:22:38.042 "ddgst": false 00:22:38.042 }, 00:22:38.042 "method": "bdev_nvme_attach_controller" 00:22:38.042 },{ 00:22:38.042 "params": { 00:22:38.042 "name": "Nvme7", 00:22:38.042 "trtype": "tcp", 00:22:38.042 "traddr": "10.0.0.2", 00:22:38.042 "adrfam": "ipv4", 00:22:38.042 "trsvcid": "4420", 00:22:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:38.042 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:38.042 "hdgst": false, 00:22:38.042 "ddgst": false 00:22:38.042 }, 00:22:38.042 "method": "bdev_nvme_attach_controller" 00:22:38.042 },{ 00:22:38.042 "params": { 00:22:38.042 "name": "Nvme8", 00:22:38.042 "trtype": "tcp", 00:22:38.042 "traddr": "10.0.0.2", 00:22:38.042 "adrfam": "ipv4", 00:22:38.042 "trsvcid": "4420", 00:22:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:38.042 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:38.042 "hdgst": false, 
00:22:38.042 "ddgst": false 00:22:38.042 }, 00:22:38.042 "method": "bdev_nvme_attach_controller" 00:22:38.042 },{ 00:22:38.042 "params": { 00:22:38.042 "name": "Nvme9", 00:22:38.042 "trtype": "tcp", 00:22:38.042 "traddr": "10.0.0.2", 00:22:38.042 "adrfam": "ipv4", 00:22:38.042 "trsvcid": "4420", 00:22:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:38.042 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:38.042 "hdgst": false, 00:22:38.042 "ddgst": false 00:22:38.042 }, 00:22:38.042 "method": "bdev_nvme_attach_controller" 00:22:38.042 },{ 00:22:38.042 "params": { 00:22:38.042 "name": "Nvme10", 00:22:38.042 "trtype": "tcp", 00:22:38.042 "traddr": "10.0.0.2", 00:22:38.042 "adrfam": "ipv4", 00:22:38.042 "trsvcid": "4420", 00:22:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:38.042 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:38.042 "hdgst": false, 00:22:38.042 "ddgst": false 00:22:38.042 }, 00:22:38.042 "method": "bdev_nvme_attach_controller" 00:22:38.042 }' 00:22:38.299 [2024-07-15 09:30:49.241269] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:22:38.299 [2024-07-15 09:30:49.241348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881045 ] 00:22:38.299 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.299 [2024-07-15 09:30:49.305712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.299 [2024-07-15 09:30:49.416419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.212 Running I/O for 1 seconds... 00:22:41.151 00:22:41.151 Latency(us) 00:22:41.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.151 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.151 Verification LBA range: start 0x0 length 0x400 00:22:41.151 Nvme1n1 : 1.10 237.47 14.84 0.00 0.00 265467.35 4150.61 246997.90 00:22:41.151 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.151 Verification LBA range: start 0x0 length 0x400 00:22:41.151 Nvme2n1 : 1.12 227.96 14.25 0.00 0.00 273394.92 20291.89 257872.02 00:22:41.151 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.151 Verification LBA range: start 0x0 length 0x400 00:22:41.151 Nvme3n1 : 1.08 236.22 14.76 0.00 0.00 259092.67 18350.08 246997.90 00:22:41.151 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.151 Verification LBA range: start 0x0 length 0x400 00:22:41.151 Nvme4n1 : 1.18 270.87 16.93 0.00 0.00 222808.75 17476.27 250104.79 00:22:41.151 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.151 Verification LBA range: start 0x0 length 0x400 00:22:41.151 Nvme5n1 : 1.11 229.63 14.35 0.00 0.00 257528.04 22233.69 253211.69 00:22:41.151 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.151 Verification LBA range: start 0x0 length 0x400 00:22:41.151 Nvme6n1 : 1.10 232.36 14.52 0.00 0.00 249706.76 18252.99 256318.58 00:22:41.151 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.151 Verification LBA range: start 0x0 length 0x400 00:22:41.151 Nvme7n1 : 1.12 233.41 14.59 0.00 0.00 243275.55 2475.80 234570.33 00:22:41.151 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.151 Verification LBA range: start 0x0 
length 0x400 00:22:41.151 Nvme8n1 : 1.19 269.30 16.83 0.00 0.00 209471.72 4271.98 257872.02 00:22:41.151 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.151 Verification LBA range: start 0x0 length 0x400 00:22:41.151 Nvme9n1 : 1.17 222.13 13.88 0.00 0.00 248643.35 2949.12 259425.47 00:22:41.151 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.151 Verification LBA range: start 0x0 length 0x400 00:22:41.151 Nvme10n1 : 1.20 266.84 16.68 0.00 0.00 204480.66 12913.02 279620.27 00:22:41.151 =================================================================================================================== 00:22:41.151 Total : 2426.19 151.64 0.00 0.00 241275.86 2475.80 279620.27 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.410 rmmod nvme_tcp 00:22:41.410 rmmod nvme_fabrics 00:22:41.410 rmmod nvme_keyring 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 880572 ']' 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 880572 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 880572 ']' 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 880572 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 880572 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:41.410 
09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 880572' 00:22:41.410 killing process with pid 880572 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 880572 00:22:41.410 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 880572 00:22:41.977 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:41.977 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:41.977 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:41.977 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.977 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.977 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.977 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.977 09:30:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.884 09:30:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:43.884 00:22:43.884 real 0m11.474s 00:22:43.884 user 0m32.372s 00:22:43.884 sys 0m3.154s 00:22:43.884 09:30:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:43.884 09:30:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.884 ************************************ 00:22:43.884 END TEST nvmf_shutdown_tc1 00:22:43.884 ************************************ 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:43.884 ************************************ 00:22:43.884 START TEST nvmf_shutdown_tc2 00:22:43.884 ************************************ 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.884 09:30:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:43.884 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:43.884 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:43.884 Found net devices under 0000:09:00.0: cvl_0_0 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:43.884 Found net devices under 0000:09:00.1: cvl_0_1 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1
00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:22:43.884 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:44.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:44.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms
00:22:44.143
00:22:44.143 --- 10.0.0.2 ping statistics ---
00:22:44.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:44.143 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:44.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:44.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms
00:22:44.143
00:22:44.143 --- 10.0.0.1 ping statistics ---
00:22:44.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:44.143 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 --
nvmf/common.sh@481 -- # nvmfpid=881806 00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 881806 00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 881806 ']' 00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.143 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.143 [2024-07-15 09:30:55.264395] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:22:44.143 [2024-07-15 09:30:55.264472] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.143 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.143 [2024-07-15 09:30:55.330360] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.402 [2024-07-15 09:30:55.440177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.402 [2024-07-15 09:30:55.440226] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.402 [2024-07-15 09:30:55.440246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.402 [2024-07-15 09:30:55.440258] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.402 [2024-07-15 09:30:55.440269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
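nvmfappstart above launches nvmf_tgt (nvmfpid=881806) inside the cvl_0_0_ns_spdk namespace and then gates on waitforlisten: it polls (max_retries=100 in the trace) until the new pid answers on its RPC UNIX socket, failing fast if the process dies first. A minimal loop in the same spirit; this is a sketch, not the exact autotest helper, with rpc_get_methods via scripts/rpc.py as the usual readiness probe:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 100; i > 0; i--)); do
        # Nothing to wait for if the target died during startup
        kill -0 "$pid" 2> /dev/null || return 1
        # The socket is ready once any RPC round-trip succeeds
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1
}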
00:22:44.402 [2024-07-15 09:30:55.440417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.402 [2024-07-15 09:30:55.440482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.402 [2024-07-15 09:30:55.440513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:44.402 [2024-07-15 09:30:55.440515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.402 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.402 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:22:44.402 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.402 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:44.402 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.660 [2024-07-15 09:30:55.605679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:44.660 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:44.661 09:30:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:44.661 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:44.661 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:44.661 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:44.661 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:44.661 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:44.661 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:44.661 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:44.661 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:44.661 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:44.661 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:44.661 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.661 09:30:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.661 Malloc1 00:22:44.661 [2024-07-15 09:30:55.684384] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.661 Malloc2 00:22:44.661 Malloc3 00:22:44.661 Malloc4 00:22:44.661 Malloc5 00:22:44.919 Malloc6 00:22:44.919 Malloc7 00:22:44.919 Malloc8 00:22:44.919 Malloc9 00:22:44.919 Malloc10 00:22:44.919 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.919 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:44.919 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:44.919 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=881987 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 881987 /var/tmp/bdevperf.sock 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 881987 ']' 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
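By this point tc2 has created the TCP transport (nvmf_create_transport -t tcp -o -u 8192 above), and the for-loop of cat calls has written one subsystem definition per id into rpcs.txt; the Malloc1..Malloc10 lines are the bdev names echoed back as those RPCs execute. The batched RPCs themselves are not traced inline here, so the following per-subsystem reconstruction is hedged: the RPC names are standard SPDK, but the malloc sizing and flags are illustrative assumptions:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
for i in {1..10}; do
    rpc_cmd bdev_malloc_create -b "Malloc$i" 64 512   # 64 MiB bdev, 512 B blocks (assumed sizing)
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

The listener address and port match the 10.0.0.2:4420 values in the attach-controller JSON generated earlier; bdevperf is then started for the 10-second verify run.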
00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.180 { 00:22:45.180 "params": { 00:22:45.180 "name": "Nvme$subsystem", 00:22:45.180 "trtype": "$TEST_TRANSPORT", 00:22:45.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.180 "adrfam": "ipv4", 00:22:45.180 "trsvcid": "$NVMF_PORT", 00:22:45.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.180 "hdgst": ${hdgst:-false}, 00:22:45.180 "ddgst": ${ddgst:-false} 00:22:45.180 }, 00:22:45.180 "method": "bdev_nvme_attach_controller" 00:22:45.180 } 00:22:45.180 EOF 00:22:45.180 )") 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.180 { 00:22:45.180 "params": { 00:22:45.180 "name": "Nvme$subsystem", 00:22:45.180 "trtype": "$TEST_TRANSPORT", 00:22:45.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.180 "adrfam": "ipv4", 00:22:45.180 "trsvcid": "$NVMF_PORT", 00:22:45.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.180 "hdgst": ${hdgst:-false}, 00:22:45.180 "ddgst": ${ddgst:-false} 00:22:45.180 }, 00:22:45.180 "method": "bdev_nvme_attach_controller" 00:22:45.180 } 00:22:45.180 EOF 00:22:45.180 )") 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.180 { 00:22:45.180 "params": { 00:22:45.180 "name": "Nvme$subsystem", 00:22:45.180 "trtype": "$TEST_TRANSPORT", 00:22:45.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.180 "adrfam": "ipv4", 00:22:45.180 "trsvcid": "$NVMF_PORT", 00:22:45.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.180 "hdgst": ${hdgst:-false}, 00:22:45.180 "ddgst": ${ddgst:-false} 00:22:45.180 }, 00:22:45.180 "method": "bdev_nvme_attach_controller" 00:22:45.180 } 00:22:45.180 EOF 00:22:45.180 )") 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.180 { 00:22:45.180 "params": { 00:22:45.180 "name": "Nvme$subsystem", 00:22:45.180 "trtype": "$TEST_TRANSPORT", 00:22:45.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.180 "adrfam": "ipv4", 00:22:45.180 "trsvcid": "$NVMF_PORT", 
00:22:45.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.180 "hdgst": ${hdgst:-false}, 00:22:45.180 "ddgst": ${ddgst:-false} 00:22:45.180 }, 00:22:45.180 "method": "bdev_nvme_attach_controller" 00:22:45.180 } 00:22:45.180 EOF 00:22:45.180 )") 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.180 { 00:22:45.180 "params": { 00:22:45.180 "name": "Nvme$subsystem", 00:22:45.180 "trtype": "$TEST_TRANSPORT", 00:22:45.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.180 "adrfam": "ipv4", 00:22:45.180 "trsvcid": "$NVMF_PORT", 00:22:45.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.180 "hdgst": ${hdgst:-false}, 00:22:45.180 "ddgst": ${ddgst:-false} 00:22:45.180 }, 00:22:45.180 "method": "bdev_nvme_attach_controller" 00:22:45.180 } 00:22:45.180 EOF 00:22:45.180 )") 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.180 { 00:22:45.180 "params": { 00:22:45.180 "name": "Nvme$subsystem", 00:22:45.180 "trtype": "$TEST_TRANSPORT", 00:22:45.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.180 "adrfam": "ipv4", 00:22:45.180 "trsvcid": "$NVMF_PORT", 00:22:45.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.180 "hdgst": ${hdgst:-false}, 00:22:45.180 "ddgst": ${ddgst:-false} 00:22:45.180 }, 00:22:45.180 "method": "bdev_nvme_attach_controller" 00:22:45.180 } 00:22:45.180 EOF 00:22:45.180 )") 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.180 { 00:22:45.180 "params": { 00:22:45.180 "name": "Nvme$subsystem", 00:22:45.180 "trtype": "$TEST_TRANSPORT", 00:22:45.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.180 "adrfam": "ipv4", 00:22:45.180 "trsvcid": "$NVMF_PORT", 00:22:45.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.180 "hdgst": ${hdgst:-false}, 00:22:45.180 "ddgst": ${ddgst:-false} 00:22:45.180 }, 00:22:45.180 "method": "bdev_nvme_attach_controller" 00:22:45.180 } 00:22:45.180 EOF 00:22:45.180 )") 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.180 { 00:22:45.180 "params": { 00:22:45.180 "name": "Nvme$subsystem", 00:22:45.180 "trtype": "$TEST_TRANSPORT", 00:22:45.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.180 "adrfam": "ipv4", 00:22:45.180 "trsvcid": "$NVMF_PORT", 00:22:45.180 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.180 "hdgst": ${hdgst:-false}, 00:22:45.180 "ddgst": ${ddgst:-false} 00:22:45.180 }, 00:22:45.180 "method": "bdev_nvme_attach_controller" 00:22:45.180 } 00:22:45.180 EOF 00:22:45.180 )") 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.180 { 00:22:45.180 "params": { 00:22:45.180 "name": "Nvme$subsystem", 00:22:45.180 "trtype": "$TEST_TRANSPORT", 00:22:45.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.180 "adrfam": "ipv4", 00:22:45.180 "trsvcid": "$NVMF_PORT", 00:22:45.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.180 "hdgst": ${hdgst:-false}, 00:22:45.180 "ddgst": ${ddgst:-false} 00:22:45.180 }, 00:22:45.180 "method": "bdev_nvme_attach_controller" 00:22:45.180 } 00:22:45.180 EOF 00:22:45.180 )") 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.180 { 00:22:45.180 "params": { 00:22:45.180 "name": "Nvme$subsystem", 00:22:45.180 "trtype": "$TEST_TRANSPORT", 00:22:45.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.180 "adrfam": "ipv4", 00:22:45.180 "trsvcid": "$NVMF_PORT", 00:22:45.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.180 "hdgst": ${hdgst:-false}, 00:22:45.180 "ddgst": ${ddgst:-false} 00:22:45.180 }, 00:22:45.180 "method": "bdev_nvme_attach_controller" 00:22:45.180 } 00:22:45.180 EOF 00:22:45.180 )") 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:45.180 09:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:45.180 "params": { 00:22:45.180 "name": "Nvme1", 00:22:45.180 "trtype": "tcp", 00:22:45.181 "traddr": "10.0.0.2", 00:22:45.181 "adrfam": "ipv4", 00:22:45.181 "trsvcid": "4420", 00:22:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.181 "hdgst": false, 00:22:45.181 "ddgst": false 00:22:45.181 }, 00:22:45.181 "method": "bdev_nvme_attach_controller" 00:22:45.181 },{ 00:22:45.181 "params": { 00:22:45.181 "name": "Nvme2", 00:22:45.181 "trtype": "tcp", 00:22:45.181 "traddr": "10.0.0.2", 00:22:45.181 "adrfam": "ipv4", 00:22:45.181 "trsvcid": "4420", 00:22:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:45.181 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:45.181 "hdgst": false, 00:22:45.181 "ddgst": false 00:22:45.181 }, 00:22:45.181 "method": "bdev_nvme_attach_controller" 00:22:45.181 },{ 00:22:45.181 "params": { 00:22:45.181 "name": "Nvme3", 00:22:45.181 "trtype": "tcp", 00:22:45.181 "traddr": "10.0.0.2", 00:22:45.181 "adrfam": "ipv4", 00:22:45.181 "trsvcid": "4420", 00:22:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:45.181 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:45.181 "hdgst": false, 00:22:45.181 "ddgst": false 00:22:45.181 }, 00:22:45.181 "method": "bdev_nvme_attach_controller" 00:22:45.181 },{ 00:22:45.181 "params": { 00:22:45.181 "name": "Nvme4", 00:22:45.181 "trtype": "tcp", 00:22:45.181 "traddr": "10.0.0.2", 00:22:45.181 "adrfam": "ipv4", 00:22:45.181 "trsvcid": "4420", 00:22:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:45.181 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:45.181 "hdgst": false, 00:22:45.181 "ddgst": false 00:22:45.181 }, 00:22:45.181 "method": "bdev_nvme_attach_controller" 00:22:45.181 },{ 00:22:45.181 "params": { 00:22:45.181 "name": "Nvme5", 00:22:45.181 "trtype": "tcp", 00:22:45.181 "traddr": "10.0.0.2", 00:22:45.181 "adrfam": "ipv4", 00:22:45.181 "trsvcid": "4420", 00:22:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:45.181 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:45.181 "hdgst": false, 00:22:45.181 "ddgst": false 00:22:45.181 }, 00:22:45.181 "method": "bdev_nvme_attach_controller" 00:22:45.181 },{ 00:22:45.181 "params": { 00:22:45.181 "name": "Nvme6", 00:22:45.181 "trtype": "tcp", 00:22:45.181 "traddr": "10.0.0.2", 00:22:45.181 "adrfam": "ipv4", 00:22:45.181 "trsvcid": "4420", 00:22:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:45.181 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:45.181 "hdgst": false, 00:22:45.181 "ddgst": false 00:22:45.181 }, 00:22:45.181 "method": "bdev_nvme_attach_controller" 00:22:45.181 },{ 00:22:45.181 "params": { 00:22:45.181 "name": "Nvme7", 00:22:45.181 "trtype": "tcp", 00:22:45.181 "traddr": "10.0.0.2", 00:22:45.181 "adrfam": "ipv4", 00:22:45.181 "trsvcid": "4420", 00:22:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:45.181 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:45.181 "hdgst": false, 00:22:45.181 "ddgst": false 00:22:45.181 }, 00:22:45.181 "method": "bdev_nvme_attach_controller" 00:22:45.181 },{ 00:22:45.181 "params": { 00:22:45.181 "name": "Nvme8", 00:22:45.181 "trtype": "tcp", 00:22:45.181 "traddr": "10.0.0.2", 00:22:45.181 "adrfam": "ipv4", 00:22:45.181 "trsvcid": "4420", 00:22:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:45.181 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:45.181 "hdgst": false, 
00:22:45.181 "ddgst": false 00:22:45.181 }, 00:22:45.181 "method": "bdev_nvme_attach_controller" 00:22:45.181 },{ 00:22:45.181 "params": { 00:22:45.181 "name": "Nvme9", 00:22:45.181 "trtype": "tcp", 00:22:45.181 "traddr": "10.0.0.2", 00:22:45.181 "adrfam": "ipv4", 00:22:45.181 "trsvcid": "4420", 00:22:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:45.181 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:45.181 "hdgst": false, 00:22:45.181 "ddgst": false 00:22:45.181 }, 00:22:45.181 "method": "bdev_nvme_attach_controller" 00:22:45.181 },{ 00:22:45.181 "params": { 00:22:45.181 "name": "Nvme10", 00:22:45.181 "trtype": "tcp", 00:22:45.181 "traddr": "10.0.0.2", 00:22:45.181 "adrfam": "ipv4", 00:22:45.181 "trsvcid": "4420", 00:22:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:45.181 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:45.181 "hdgst": false, 00:22:45.181 "ddgst": false 00:22:45.181 }, 00:22:45.181 "method": "bdev_nvme_attach_controller" 00:22:45.181 }' 00:22:45.181 [2024-07-15 09:30:56.171536] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:22:45.181 [2024-07-15 09:30:56.171633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881987 ] 00:22:45.181 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.181 [2024-07-15 09:30:56.235423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.181 [2024-07-15 09:30:56.345690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.083 Running I/O for 10 seconds... 00:22:47.083 09:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.083 09:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:22:47.083 09:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:47.083 09:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.083 09:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:47.083 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:47.344 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:47.344 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:47.344 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:47.344 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.344 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:47.344 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.344 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.344 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:47.344 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:47.344 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 881987 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 881987 ']' 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 881987 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 
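read_io_count climbs 3 -> 67 -> 131 across the polls above: waitforio keeps querying bdev_get_iostat until Nvme1n1 has completed at least 100 reads or ten polls elapse, the cue that bdevperf is actively driving I/O and the target can be killed mid-run. Roughly equivalent polling, assuming SPDK's scripts/rpc.py is available on the build host:

i=10
while (( i != 0 )); do
  read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
    jq -r '.bdevs[0].num_read_ops')
  (( read_io_count >= 100 )) && break  # enough I/O observed, stop polling
  sleep 0.25
  (( i-- ))
done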
00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 881987 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 881987' killing process with pid 881987 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 881987 00:22:47.603 09:30:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 881987
00:22:47.862 Received shutdown signal, test time was about 0.910573 seconds
00:22:47.862
00:22:47.862 Latency(us)
00:22:47.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:47.862 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.862 Verification LBA range: start 0x0 length 0x400
00:22:47.862 Nvme1n1 : 0.90 214.07 13.38 0.00 0.00 295268.00 22039.51 264085.81
00:22:47.862 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.862 Verification LBA range: start 0x0 length 0x400
00:22:47.862 Nvme2n1 : 0.87 226.65 14.17 0.00 0.00 269957.68 2815.62 250104.79
00:22:47.862 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.862 Verification LBA range: start 0x0 length 0x400
00:22:47.862 Nvme3n1 : 0.90 283.50 17.72 0.00 0.00 212257.19 17767.54 254765.13
00:22:47.862 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.862 Verification LBA range: start 0x0 length 0x400
00:22:47.862 Nvme4n1 : 0.91 281.40 17.59 0.00 0.00 210723.84 17379.18 270299.59
00:22:47.862 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.862 Verification LBA range: start 0x0 length 0x400
00:22:47.862 Nvme5n1 : 0.87 221.22 13.83 0.00 0.00 261318.98 19806.44 250104.79
00:22:47.862 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.862 Verification LBA range: start 0x0 length 0x400
00:22:47.862 Nvme6n1 : 0.89 216.12 13.51 0.00 0.00 262221.12 20388.98 260978.92
00:22:47.862 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.862 Verification LBA range: start 0x0 length 0x400
00:22:47.862 Nvme7n1 : 0.88 217.84 13.61 0.00 0.00 253820.52 22039.51 248551.35
00:22:47.862 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.862 Verification LBA range: start 0x0 length 0x400
00:22:47.862 Nvme8n1 : 0.87 220.02 13.75 0.00 0.00 244465.97 25243.50 254765.13
00:22:47.862 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.862 Verification LBA range: start 0x0 length 0x400
00:22:47.862 Nvme9n1 : 0.91 211.96 13.25 0.00 0.00 249765.48 24563.86 285834.05
00:22:47.862 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.862 Verification LBA range: start 0x0 length 0x400
00:22:47.862 Nvme10n1 : 0.90 213.11 13.32 0.00 0.00 242649.57 38059.43 265639.25
00:22:47.862 ===================================================================================================================
00:22:47.862 Total : 2305.89 144.12 0.00 0.00 247876.60 2815.62 285834.05
00:22:48.121 09:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 881806 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:49.056 rmmod nvme_tcp 00:22:49.056 rmmod nvme_fabrics 00:22:49.056 rmmod nvme_keyring 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 881806 ']' 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 881806 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 881806 ']' 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 881806 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 881806 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 881806' killing process with pid 881806 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 881806 00:22:49.056 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 881806 00:22:49.623 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso
']' 00:22:49.623 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:49.623 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:49.623 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:49.623 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:49.623 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.623 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.623 09:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.158 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:52.158 00:22:52.158 real 0m7.727s 00:22:52.158 user 0m23.363s 00:22:52.158 sys 0m1.465s 00:22:52.158 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:52.158 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.158 ************************************ 00:22:52.158 END TEST nvmf_shutdown_tc2 00:22:52.158 ************************************ 00:22:52.158 09:31:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:52.158 09:31:02 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:52.158 09:31:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:52.158 09:31:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:52.158 09:31:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:52.158 ************************************ 00:22:52.158 START TEST nvmf_shutdown_tc3 00:22:52.158 ************************************ 00:22:52.158 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:22:52.158 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:52.158 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:52.159 09:31:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:52.159 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:52.159 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:52.159 Found net devices under 0000:09:00.0: cvl_0_0 00:22:52.159 09:31:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:52.159 Found net devices under 0000:09:00.1: cvl_0_1 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:52.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:22:52.159 00:22:52.159 --- 10.0.0.2 ping statistics --- 00:22:52.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.159 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:52.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:22:52.159 00:22:52.159 --- 10.0.0.1 ping statistics --- 00:22:52.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.159 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:52.159 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=882898 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 882898 00:22:52.160 09:31:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 882898 ']' 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.160 09:31:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.160 [2024-07-15 09:31:03.042707] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:22:52.160 [2024-07-15 09:31:03.042784] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.160 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.160 [2024-07-15 09:31:03.106995] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:52.160 [2024-07-15 09:31:03.219197] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.160 [2024-07-15 09:31:03.219256] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.160 [2024-07-15 09:31:03.219270] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.160 [2024-07-15 09:31:03.219281] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.160 [2024-07-15 09:31:03.219291] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
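nvmfappstart launches nvmf_tgt with -m 0x1E, a core mask covering cores 1 through 4, which matches the four reactor threads reported next. A condensed form of the command traced above (the namespace prefix appears three times in the trace because NVMF_APP re-prepends NVMF_TARGET_NS_CMD, as seen earlier at nvmf/common.sh@270; re-entering the same netns is redundant but harmless):

ip netns exec cvl_0_0_ns_spdk \
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E  # -e 0xFFFF enables all tracepoint groups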
00:22:52.160 [2024-07-15 09:31:03.219424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.160 [2024-07-15 09:31:03.219488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:52.160 [2024-07-15 09:31:03.219558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.160 [2024-07-15 09:31:03.219555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:52.160 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.160 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:22:52.160 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.160 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:52.160 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.420 [2024-07-15 09:31:03.376667] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.420 09:31:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.420 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.420 Malloc1 00:22:52.420 [2024-07-15 09:31:03.459598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.420 Malloc2 00:22:52.420 Malloc3 00:22:52.420 Malloc4 00:22:52.679 Malloc5 00:22:52.679 Malloc6 00:22:52.679 Malloc7 00:22:52.679 Malloc8 00:22:52.679 Malloc9 00:22:52.939 Malloc10 00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=883079 00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 883079 /var/tmp/bdevperf.sock 00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 883079 ']' 00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:52.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:22:52.939 {
00:22:52.939 "params": {
00:22:52.939 "name": "Nvme$subsystem",
00:22:52.939 "trtype": "$TEST_TRANSPORT",
00:22:52.939 "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:52.939 "adrfam": "ipv4",
00:22:52.939 "trsvcid": "$NVMF_PORT",
00:22:52.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:52.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:52.939 "hdgst": ${hdgst:-false},
00:22:52.939 "ddgst": ${ddgst:-false}
00:22:52.939 },
00:22:52.939 "method": "bdev_nvme_attach_controller"
00:22:52.939 }
00:22:52.939 EOF
00:22:52.939 )")
00:22:52.939 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:22:52.939 [... the for/config+=/cat trace above repeats identically once per subsystem, ten blocks in total ...]
00:22:52.940 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
00:22:52.940 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:52.940 09:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:52.940 "params": { 00:22:52.940 "name": "Nvme1", 00:22:52.940 "trtype": "tcp", 00:22:52.940 "traddr": "10.0.0.2", 00:22:52.940 "adrfam": "ipv4", 00:22:52.940 "trsvcid": "4420", 00:22:52.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.940 "hdgst": false, 00:22:52.940 "ddgst": false 00:22:52.940 }, 00:22:52.940 "method": "bdev_nvme_attach_controller" 00:22:52.940 },{ 00:22:52.940 "params": { 00:22:52.940 "name": "Nvme2", 00:22:52.940 "trtype": "tcp", 00:22:52.940 "traddr": "10.0.0.2", 00:22:52.940 "adrfam": "ipv4", 00:22:52.940 "trsvcid": "4420", 00:22:52.940 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:52.940 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:52.940 "hdgst": false, 00:22:52.940 "ddgst": false 00:22:52.940 }, 00:22:52.940 "method": "bdev_nvme_attach_controller" 00:22:52.940 },{ 00:22:52.940 "params": { 00:22:52.940 "name": "Nvme3", 00:22:52.940 "trtype": "tcp", 00:22:52.940 "traddr": "10.0.0.2", 00:22:52.940 "adrfam": "ipv4", 00:22:52.940 "trsvcid": "4420", 00:22:52.940 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:52.940 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:52.940 "hdgst": false, 00:22:52.940 "ddgst": false 00:22:52.940 }, 00:22:52.940 "method": "bdev_nvme_attach_controller" 00:22:52.940 },{ 00:22:52.940 "params": { 00:22:52.940 "name": "Nvme4", 00:22:52.940 "trtype": "tcp", 00:22:52.940 "traddr": "10.0.0.2", 00:22:52.940 "adrfam": "ipv4", 00:22:52.940 "trsvcid": "4420", 00:22:52.940 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:52.940 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:52.940 "hdgst": false, 00:22:52.940 "ddgst": false 00:22:52.940 }, 00:22:52.940 "method": "bdev_nvme_attach_controller" 00:22:52.940 },{ 00:22:52.940 "params": { 00:22:52.940 "name": "Nvme5", 00:22:52.940 "trtype": "tcp", 00:22:52.940 "traddr": "10.0.0.2", 00:22:52.940 "adrfam": "ipv4", 00:22:52.940 "trsvcid": "4420", 00:22:52.940 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:52.940 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:52.940 "hdgst": false, 00:22:52.940 "ddgst": false 00:22:52.940 }, 00:22:52.940 "method": "bdev_nvme_attach_controller" 00:22:52.940 },{ 00:22:52.940 "params": { 00:22:52.940 "name": "Nvme6", 00:22:52.940 "trtype": "tcp", 00:22:52.940 "traddr": "10.0.0.2", 00:22:52.940 "adrfam": "ipv4", 00:22:52.940 "trsvcid": "4420", 00:22:52.940 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:52.940 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:52.940 "hdgst": false, 00:22:52.940 "ddgst": false 00:22:52.940 }, 00:22:52.940 "method": "bdev_nvme_attach_controller" 00:22:52.940 },{ 00:22:52.940 "params": { 00:22:52.940 "name": "Nvme7", 00:22:52.940 "trtype": "tcp", 00:22:52.940 "traddr": "10.0.0.2", 00:22:52.940 "adrfam": "ipv4", 00:22:52.940 "trsvcid": "4420", 00:22:52.940 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:52.940 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:52.940 "hdgst": false, 00:22:52.940 "ddgst": false 00:22:52.940 }, 00:22:52.940 "method": "bdev_nvme_attach_controller" 00:22:52.940 },{ 00:22:52.940 "params": { 00:22:52.940 "name": "Nvme8", 00:22:52.940 "trtype": "tcp", 00:22:52.940 "traddr": "10.0.0.2", 00:22:52.940 "adrfam": "ipv4", 00:22:52.940 "trsvcid": "4420", 00:22:52.940 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:52.940 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:52.940 "hdgst": false, 
00:22:52.940 "ddgst": false 00:22:52.940 }, 00:22:52.940 "method": "bdev_nvme_attach_controller" 00:22:52.940 },{ 00:22:52.940 "params": { 00:22:52.940 "name": "Nvme9", 00:22:52.940 "trtype": "tcp", 00:22:52.940 "traddr": "10.0.0.2", 00:22:52.940 "adrfam": "ipv4", 00:22:52.940 "trsvcid": "4420", 00:22:52.940 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:52.940 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:52.940 "hdgst": false, 00:22:52.940 "ddgst": false 00:22:52.940 }, 00:22:52.940 "method": "bdev_nvme_attach_controller" 00:22:52.940 },{ 00:22:52.940 "params": { 00:22:52.940 "name": "Nvme10", 00:22:52.940 "trtype": "tcp", 00:22:52.940 "traddr": "10.0.0.2", 00:22:52.940 "adrfam": "ipv4", 00:22:52.940 "trsvcid": "4420", 00:22:52.940 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:52.940 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:52.940 "hdgst": false, 00:22:52.940 "ddgst": false 00:22:52.940 }, 00:22:52.940 "method": "bdev_nvme_attach_controller" 00:22:52.940 }' 00:22:52.940 [2024-07-15 09:31:03.972467] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:22:52.940 [2024-07-15 09:31:03.972544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883079 ] 00:22:52.940 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.940 [2024-07-15 09:31:04.035590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.198 [2024-07-15 09:31:04.146194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.576 Running I/O for 10 seconds... 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.835 09:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.092 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.092 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:55.092 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:55.092 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 882898 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 882898 ']' 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 882898 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 882898 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 882898' 00:22:55.366 killing process with pid 882898 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 882898 00:22:55.366 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 882898 00:22:55.366 
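The trace above is the waitforio helper polling bdevperf over its RPC socket: it reads num_read_ops for Nvme1n1 via bdev_get_iostat and succeeds once at least 100 reads have completed, retrying up to ten times with 0.25 s pauses (67 ops on the first poll, 131 on the second). Only after that does killprocess take down the long-running target app (pid 882898) mid-I/O, which is the shutdown path this test exercises. A rough re-creation of the polling loop, assuming SPDK's scripts/rpc.py is called directly in place of the rpc_cmd wrapper seen in the trace:

waitforio_sketch() {
    # $1 = RPC socket (e.g. /var/tmp/bdevperf.sock), $2 = bdev name (e.g. Nvme1n1)
    local sock=$1 bdev=$2 ret=1 i count
    for ((i = 10; i != 0; i--)); do
        # bdev_get_iostat returns per-bdev counters; extract completed reads.
        count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0   # enough I/O has flowed; safe to start killing processes
            break
        fi
        sleep 0.25
    done
    return $ret
}

The 100-op threshold matches the '-ge 100' test in the trace; it simply guarantees bdevperf was actively issuing I/O to the target before the target is killed mid-run.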
[2024-07-15 09:31:06.356312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23561a0 is same with the state(5) to be set
00:22:55.366 [... the tcp.c:1607 recv-state error above repeats many times for tqpair=0x23561a0 ...]
00:22:55.367 [2024-07-15 09:31:06.359250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.367 [2024-07-15 09:31:06.359292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.367 [... the ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair above repeats for qid:0 cid:1 through cid:3 ...]
00:22:55.367 [2024-07-15 09:31:06.359412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0d830 is same with the state(5) to be set
00:22:55.367 [2024-07-15 09:31:06.359813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2356640 is same with the state(5) to be set
00:22:55.367 [... the tcp.c:1607 recv-state error above repeats many times for tqpair=0x2356640 ...]
00:22:55.368 [2024-07-15 09:31:06.360910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.368 [2024-07-15 09:31:06.360936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.368 [2024-07-15 09:31:06.360963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.368 [2024-07-15 09:31:06.360979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.368 [2024-07-15 09:31:06.360996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.368 [2024-07-15 09:31:06.361012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.368 [... the WRITE command/ABORTED - SQ DELETION completion pair above repeats for cid:50 through cid:63 (lba 22784 through 24448, in steps of 128) ...]
00:22:55.368 [2024-07-15 09:31:06.361469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.368 [2024-07-15 09:31:06.361483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.369 [... the READ command/ABORTED - SQ DELETION completion pair above repeats for cid:1 through cid:46 (lba 16512 through 22272, in steps of 128) ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.369 [2024-07-15 09:31:06.362902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.369 [2024-07-15 09:31:06.362916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.369 [2024-07-15 09:31:06.363443] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfa65b0 was disconnected and freed. reset controller. 00:22:55.369 [2024-07-15 09:31:06.363552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2356fa0 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.363586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2356fa0 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.363603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2356fa0 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.363615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2356fa0 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same 
with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.369 [2024-07-15 09:31:06.364644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364775] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364879] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364920] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.364995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the 
state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.365279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357440 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366479] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.370 [2024-07-15 09:31:06.366737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 
09:31:06.366977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.366989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same 
with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.367319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.368122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368394] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371
[2024-07-15 09:31:06.368455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371
[2024-07-15 09:31:06.368516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371
[2024-07-15 09:31:06.368577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371
[2024-07-15 09:31:06.368627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.368639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.371
[2024-07-15 09:31:06.368674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.368688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371 [2024-07-15 09:31:06.368690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.371
[2024-07-15 09:31:06.368707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.368709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.368724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.371
[2024-07-15 09:31:06.368733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.371 [2024-07-15 09:31:06.368739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.371 [2024-07-15 09:31:06.368746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372
[2024-07-15 09:31:06.368759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.368772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372
[2024-07-15 09:31:06.368785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372 [2024-07-15 09:31:06.368816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372
[2024-07-15 09:31:06.368842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372 [2024-07-15 09:31:06.368854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372
[2024-07-15 09:31:06.368867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372 [2024-07-15 09:31:06.368882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368897] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372
[2024-07-15 09:31:06.368899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.368910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372 [2024-07-15 09:31:06.368923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372
[2024-07-15 09:31:06.368935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.368936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372 [2024-07-15 09:31:06.368952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372
[2024-07-15 09:31:06.368967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.368980] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372
[2024-07-15 09:31:06.368992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.368999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.369005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372
[2024-07-15 09:31:06.369019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.369032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372
[2024-07-15 09:31:06.369046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.369074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372
[2024-07-15 09:31:06.369079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372 [2024-07-15 09:31:06.369087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372
[2024-07-15 09:31:06.369122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372 [2024-07-15 09:31:06.369138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372
[2024-07-15 09:31:06.369151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372 [2024-07-15 09:31:06.369164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372
[2024-07-15 09:31:06.369177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372 [2024-07-15 09:31:06.369190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372
[2024-07-15 09:31:06.369203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.369231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372 [2024-07-15 09:31:06.369243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372
[2024-07-15 09:31:06.369250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.369256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372 [2024-07-15 09:31:06.369268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372
[2024-07-15 09:31:06.369280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.369295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372
[2024-07-15 09:31:06.369307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.369322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372
[2024-07-15 09:31:06.369335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.369347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372
[2024-07-15 09:31:06.369359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.369386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372
[2024-07-15 09:31:06.369387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372 [2024-07-15 09:31:06.369401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.372 [2024-07-15 09:31:06.369414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372
[2024-07-15 09:31:06.369419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.372 [2024-07-15 09:31:06.369426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.372 [2024-07-15 09:31:06.369434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.369438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.373
[2024-07-15 09:31:06.369448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.369463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.369465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373
[2024-07-15 09:31:06.369475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.369479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.369495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373
[2024-07-15 09:31:06.369500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.369512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357da0 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.369528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373
[2024-07-15 09:31:06.369543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.369571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373
[2024-07-15 09:31:06.369600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.369629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373
[2024-07-15 09:31:06.369658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.369687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373
[2024-07-15 09:31:06.369716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15
09:31:06.369731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.369744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.369773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.369829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.369859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.369892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.369921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.369951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.369980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.369995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.370009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.370025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.370039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 
09:31:06.370054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.370069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.370084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.370107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.370137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.370152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.370167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.370181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.370196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.373 [2024-07-15 09:31:06.370211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.373 [2024-07-15 09:31:06.370249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:55.373 [2024-07-15 09:31:06.370314] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf075a0 was disconnected and freed. reset controller. 
00:22:55.373 [2024-07-15 09:31:06.370616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370757] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.373 [2024-07-15 09:31:06.370975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.370987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371212] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2358240 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.371773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:55.374 [2024-07-15 09:31:06.371865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbdec0 (9): Bad file descriptor 00:22:55.374 
[2024-07-15 09:31:06.371928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.374 [2024-07-15 09:31:06.371949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.374 [2024-07-15 09:31:06.371964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.374 [2024-07-15 09:31:06.371978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.374 [2024-07-15 09:31:06.371992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.374 [2024-07-15 09:31:06.372006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.374 [2024-07-15 09:31:06.372019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.374 [2024-07-15 09:31:06.372033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.374 [2024-07-15 09:31:06.372046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3840 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.372102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.374 [2024-07-15 09:31:06.372123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.374 [2024-07-15 09:31:06.372138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.374 [2024-07-15 09:31:06.372152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.374 [2024-07-15 09:31:06.372166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.374 [2024-07-15 09:31:06.372180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.374 [2024-07-15 09:31:06.372194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.374 [2024-07-15 09:31:06.372196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.372212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.374 [2024-07-15 09:31:06.372223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.372227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set 00:22:55.374 [2024-07-15 09:31:06.372237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 
is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.374 [2024-07-15 09:31:06.372288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.374 [2024-07-15 09:31:06.372301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.374 [2024-07-15 09:31:06.372314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.374 [2024-07-15 09:31:06.372327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.374 [2024-07-15 09:31:06.372340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.374 [2024-07-15 09:31:06.372369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.374 [2024-07-15 09:31:06.372382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.374 [2024-07-15 09:31:06.372395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1990 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.374 [2024-07-15 09:31:06.372452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.374 [2024-07-15 09:31:06.372472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.374 [2024-07-15 09:31:06.372480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.374 [2024-07-15 09:31:06.372486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.372499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.372511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.372524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.372552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.372567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30280 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.372619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.372651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.372665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.372678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.372691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.372705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.372718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.372731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39450 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.372811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.372837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.372825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.372856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.372872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.372885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.372915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.372929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fc60 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.372981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.372989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.372994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.373005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.373012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.373019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.373026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.373034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.373038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.373048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.373051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.373063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.375 [2024-07-15 09:31:06.373064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.373081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.375 [2024-07-15 09:31:06.373082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.373096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9240 is same with the state(5) to be set
00:22:55.375 [2024-07-15 09:31:06.373097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23586e0 is same with the state(5) to be set
00:22:55.376 [2024-07-15 09:31:06.373124] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0d830 (9): Bad file descriptor 00:22:55.376 [2024-07-15 09:31:06.373181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.376 [2024-07-15 09:31:06.373203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.373218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.376 [2024-07-15 09:31:06.373231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.373245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.376 [2024-07-15 09:31:06.373259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.373273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.376 [2024-07-15 09:31:06.373286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.373299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bba20 is same with the state(5) to be set 00:22:55.376 [2024-07-15 09:31:06.374993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:55.376 [2024-07-15 09:31:06.375028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fc60 (9): Bad file descriptor 00:22:55.376 [2024-07-15 09:31:06.376807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.376 [2024-07-15 09:31:06.376839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbdec0 with addr=10.0.0.2, port=4420 00:22:55.376 [2024-07-15 09:31:06.376857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbdec0 is same with the state(5) to be set 00:22:55.376 [2024-07-15 09:31:06.376923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.376945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.376967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.376983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.376999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377662] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfad0f0 was disconnected and freed. reset controller. 
00:22:55.376 [2024-07-15 09:31:06.377773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.377965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.377989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.378004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.378019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.378039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.378055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.378070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.378092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.378106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 
09:31:06.378123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.378137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.378153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.378168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.378184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.376 [2024-07-15 09:31:06.378198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.376 [2024-07-15 09:31:06.378215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.377 [2024-07-15 09:31:06.378229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.377 [2024-07-15 09:31:06.378245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.377 [2024-07-15 09:31:06.378260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.377 [2024-07-15 09:31:06.378276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.377 [2024-07-15 09:31:06.378290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.377 [2024-07-15 09:31:06.378306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.377 [2024-07-15 09:31:06.378321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.377 [2024-07-15 09:31:06.378337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.377 [2024-07-15 09:31:06.378352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.377 [2024-07-15 09:31:06.378368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.377 [2024-07-15 09:31:06.378383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.377 [2024-07-15 09:31:06.378399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.377 [2024-07-15 09:31:06.378414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.377 [2024-07-15 
09:31:06.378433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.377 [2024-07-15 09:31:06.378448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.377 [2024-07-15 09:31:06.378464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.377 [2024-07-15 09:31:06.378478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.377 [2024-07-15 09:31:06.378563] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10a59c0 was disconnected and freed. reset controller. 00:22:55.377 [2024-07-15 09:31:06.379321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.377 [2024-07-15 09:31:06.379351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fc60 with addr=10.0.0.2, port=4420 00:22:55.377 [2024-07-15 09:31:06.379368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fc60 is same with the state(5) to be set 00:22:55.377 [2024-07-15 09:31:06.379387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbdec0 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.380373] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:55.377 [2024-07-15 09:31:06.381396] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:55.377 [2024-07-15 09:31:06.381469] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:55.377 [2024-07-15 09:31:06.381553] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:55.377 [2024-07-15 09:31:06.381734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:55.377 [2024-07-15 09:31:06.381765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:55.377 [2024-07-15 09:31:06.381808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf39450 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.381848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fc60 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.381870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:55.377 [2024-07-15 09:31:06.381885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:55.377 [2024-07-15 09:31:06.381901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:55.377 [2024-07-15 09:31:06.382098] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:55.377 [2024-07-15 09:31:06.382173] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:55.377 [2024-07-15 09:31:06.382201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:55.377 [2024-07-15 09:31:06.382317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.377 [2024-07-15 09:31:06.382344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0d830 with addr=10.0.0.2, port=4420 00:22:55.377 [2024-07-15 09:31:06.382361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0d830 is same with the state(5) to be set 00:22:55.377 [2024-07-15 09:31:06.382388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:55.377 [2024-07-15 09:31:06.382405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:55.377 [2024-07-15 09:31:06.382419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:55.377 [2024-07-15 09:31:06.382447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd3840 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.382488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd1b70 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.382522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd1990 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.382555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30280 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.382588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9240 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.382620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bba20 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.383277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.377 [2024-07-15 09:31:06.383385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.377 [2024-07-15 09:31:06.383413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf39450 with addr=10.0.0.2, port=4420 00:22:55.377 [2024-07-15 09:31:06.383430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39450 is same with the state(5) to be set 00:22:55.377 [2024-07-15 09:31:06.383449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0d830 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.383522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf39450 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.383546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:55.377 [2024-07-15 09:31:06.383560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:55.377 [2024-07-15 09:31:06.383575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:55.377 [2024-07-15 09:31:06.383643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:55.377 [2024-07-15 09:31:06.383666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:55.377 [2024-07-15 09:31:06.383680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:55.377 [2024-07-15 09:31:06.383693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:55.377 [2024-07-15 09:31:06.383745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.377 [2024-07-15 09:31:06.385173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:55.377 [2024-07-15 09:31:06.385312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.377 [2024-07-15 09:31:06.385341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbdec0 with addr=10.0.0.2, port=4420 00:22:55.377 [2024-07-15 09:31:06.385358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbdec0 is same with the state(5) to be set 00:22:55.377 [2024-07-15 09:31:06.385412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbdec0 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.385467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:55.377 [2024-07-15 09:31:06.385486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:55.377 [2024-07-15 09:31:06.385500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:55.377 [2024-07-15 09:31:06.385553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.377 [2024-07-15 09:31:06.387008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:55.377 [2024-07-15 09:31:06.387161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.377 [2024-07-15 09:31:06.387201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fc60 with addr=10.0.0.2, port=4420 00:22:55.377 [2024-07-15 09:31:06.387219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fc60 is same with the state(5) to be set 00:22:55.377 [2024-07-15 09:31:06.387272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fc60 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.387327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:55.377 [2024-07-15 09:31:06.387346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:55.377 [2024-07-15 09:31:06.387361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:55.377 [2024-07-15 09:31:06.387412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:55.377 [2024-07-15 09:31:06.391950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:55.377 [2024-07-15 09:31:06.392130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.377 [2024-07-15 09:31:06.392160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0d830 with addr=10.0.0.2, port=4420 00:22:55.377 [2024-07-15 09:31:06.392177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0d830 is same with the state(5) to be set 00:22:55.377 [2024-07-15 09:31:06.392232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0d830 (9): Bad file descriptor 00:22:55.377 [2024-07-15 09:31:06.392353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:55.377 [2024-07-15 09:31:06.392375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:55.377 [2024-07-15 09:31:06.392391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:55.377 [2024-07-15 09:31:06.392463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.377 [2024-07-15 09:31:06.392487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.377 [2024-07-15 09:31:06.392516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.377 [2024-07-15 09:31:06.392533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.377 [2024-07-15 09:31:06.392550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.377 [2024-07-15 09:31:06.392566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.377 [2024-07-15 09:31:06.392583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.392598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.392614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.392629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.392646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.392661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.392677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.392700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.392716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.392731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.392748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.392763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.392779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.392793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.392819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.392835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.392852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.392866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.392882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.392897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.392914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.392929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.392945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.392960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.392976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.392991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.378 [2024-07-15 09:31:06.393898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.378 [2024-07-15 09:31:06.393913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.393930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.393945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.393961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.393975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.393992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.394506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.394520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a45e0 is same with the state(5) to be set 00:22:55.379 [2024-07-15 09:31:06.395783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.395816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.395838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.395855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.395871] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.395886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.395902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.395917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.395933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.395954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.395971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.395986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.396003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.396018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.396035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.396050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.396066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.396081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.396097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.396113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.396129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.396144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.396160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.396175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.396192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.396207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.396223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.396237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.396254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.396269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.396285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.379 [2024-07-15 09:31:06.396300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.379 [2024-07-15 09:31:06.396316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.396973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.396989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:55.380 [2024-07-15 09:31:06.397162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 
09:31:06.397469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.380 [2024-07-15 09:31:06.397643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.380 [2024-07-15 09:31:06.397657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.397673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.397688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.397704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.397718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.397734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.397749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.397765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.397779] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.397795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.397818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.397834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf08a30 is same with the state(5) to be set 00:22:55.381 [2024-07-15 09:31:06.399057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399332] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.399974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.399989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.400005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.400020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.400037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.400052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.400069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.400084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.400102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.400117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.400133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.400147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.400164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.400178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.400194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.400209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.381 [2024-07-15 09:31:06.400224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.381 [2024-07-15 09:31:06.400239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.382 [2024-07-15 09:31:06.400255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.382 [2024-07-15 09:31:06.400269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.382 [2024-07-15 09:31:06.400285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT 
00:22:55.382 [2024-07-15 09:31:06.400299 - 09:31:06.401101] nvme_qpair.c: (condensed; continuation of a run cut off above) repeated *NOTICE* pairs: READ sqid:1 cid:39-63 nsid:1 lba:21376-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.382 [2024-07-15 09:31:06.401116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1693900 is same with the state(5) to be set
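Every completion in these runs carries the same status, printed by spdk_nvme_print_completion as "ABORTED - SQ DELETION (00/08)": the pair in parentheses is the NVMe status code type and status code in hex, and generic status code 0x08 is "Command Aborted due to SQ Deletion", i.e. the reads were still in flight when their submission queue was torn down. A minimal decoding sketch (illustrative only, not SPDK's own printer):

/* Decode the "(SCT/SC)" pair that the log prints after each status
 * string. Values follow the NVMe base specification. */
#include <stdio.h>

static const char *nvme_generic_status(unsigned sct, unsigned sc)
{
    if (sct != 0x0)                 /* 0x0 = generic command status */
        return "non-generic status code type";
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x08: return "ABORTED - SQ DELETION";
    default:   return "other generic status code";
    }
}

int main(void)
{
    /* The log's "(00/08)": SCT 0x0, SC 0x08 -- the read was aborted
     * because its submission queue was deleted while it was pending. */
    printf("(00/08) -> %s\n", nvme_generic_status(0x0, 0x08));
    return 0;
}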
00:22:55.382 [2024-07-15 09:31:06.402394 - 09:31:06.404418] nvme_qpair.c: (condensed: 64 repeated *NOTICE* pairs) READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.384 [2024-07-15 09:31:06.404434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183b2b0 is same with the state(5) to be set
00:22:55.384 [2024-07-15 09:31:06.405669 - 09:31:06.407698] nvme_qpair.c: (condensed: 64 repeated *NOTICE* pairs) READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.385 [2024-07-15 09:31:06.407713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2c90 is same with the state(5) to be set
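The three *ERROR* lines above come from the TCP transport's PDU receive state machine: during teardown the qpair is asked to enter the receive state it is already in (the log's "state(5)"), and the redundant transition is reported rather than silently repeated. A sketch of that guard, with a hypothetical enum standing in for SPDK's actual state names and numbering:

#include <stdio.h>

/* Hypothetical stand-in for the qpair's receive-state enum; the real
 * names and values live in SPDK's nvme_tcp code. */
enum recv_state { RECV_AWAIT_HDR, RECV_AWAIT_PAYLOAD, RECV_ERROR };

struct tqpair {
    enum recv_state recv_state;
};

/* Mirrors the behavior implied by the log line: a redundant transition
 * is reported and the state is left unchanged. */
static void set_recv_state(struct tqpair *q, enum recv_state s)
{
    if (q->recv_state == s) {
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)q, (int)s);
        return;
    }
    q->recv_state = s;
}

int main(void)
{
    struct tqpair q = { .recv_state = RECV_ERROR };
    set_recv_state(&q, RECV_ERROR); /* reproduces the redundant-set case */
    return 0;
}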
00:22:55.385 [2024-07-15 09:31:06.408949 - 09:31:06.410426] nvme_qpair.c: (condensed: repeated *NOTICE* pairs, log cut off mid-entry) READ sqid:1 cid:0-45 nsid:1 lba:16384-22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.410973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.410989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.387 [2024-07-15 09:31:06.411004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.387 [2024-07-15 09:31:06.411019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa5120 is same with the state(5) to be set 00:22:55.387 [2024-07-15 09:31:06.413037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
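Two details make the abort storm above easier to read. The "(00/08)" pair is the NVMe completion's SCT/SC: status code type 0x00 (generic command status) with status code 0x08 ("Command Aborted due to SQ Deletion"), the expected completion for every command still in flight on qid:1 once its submission queue is deleted mid-run. Each aborted READ is len:128 logical blocks, which at the 512-byte block size the Malloc bdevs in these tests use (cf. bdev_malloc_create 64 512 later in this log) is one 64 KiB bdevperf IO, matching "IO size: 65536" in the job summary below. A one-line sanity check (bash; a sketch, not output from this run):

$ echo $((128 * 512))   # len:128 blocks x 512 B per block
65536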
00:22:55.387 [2024-07-15 09:31:06.413070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:55.387 [2024-07-15 09:31:06.413097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:55.387 [2024-07-15 09:31:06.413116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:55.387 [2024-07-15 09:31:06.413246] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:55.387 [2024-07-15 09:31:06.413273] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:55.387 [2024-07-15 09:31:06.413295] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:55.387 [2024-07-15 09:31:06.413390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:55.387 [2024-07-15 09:31:06.413416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:55.387 task offset: 22400 on job bdev=Nvme10n1 fails
00:22:55.387
00:22:55.387 Latency(us)
00:22:55.387 All jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536, Verification LBA range: start 0x0 length 0x400; each job ended in error after the runtime shown.
00:22:55.387 Device      runtime(s)     IOPS   MiB/s   Fail/s   TO/s     Average        min        max
00:22:55.387 Nvme1n1           0.85   222.43   13.90    24.58   0.00   255212.85   32816.55   248551.35
00:22:55.387 Nvme2n1           0.87   152.96    9.56    73.60   0.00   273278.84   21845.33   239230.67
00:22:55.387 Nvme3n1           0.86   224.52   14.03    25.73   0.00   240882.12   12524.66   248551.35
00:22:55.387 Nvme4n1           0.85   226.28   14.14    75.43   0.00   195778.51    5704.06   260978.92
00:22:55.387 Nvme5n1           0.87   146.66    9.17    73.33   0.00   263154.09   22330.79   262532.36
00:22:55.387 Nvme6n1           0.88   146.11    9.13    73.05   0.00   258152.68   20680.25   260978.92
00:22:55.387 Nvme7n1           0.88   145.56    9.10    72.78   0.00   253259.47   19806.44   239230.67
00:22:55.387 Nvme8n1           0.88   145.02    9.06    72.51   0.00   248439.34   18155.90   254765.13
00:22:55.387 Nvme9n1           0.89   144.48    9.03    72.24   0.00   243626.54   21456.97   293601.28
00:22:55.387 Nvme10n1          0.85   151.36    9.46    75.68   0.00   224261.31   11796.48   271853.04
00:22:55.387 ===================================================================================================================
00:22:55.387 Total                -   1705.37  106.59   638.93   0.00   244140.62    5704.06   293601.28
00:22:55.387 [2024-07-15 09:31:06.440666] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:55.387 [2024-07-15 09:31:06.440743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:55.388 [2024-07-15 09:31:06.441022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.388 [2024-07-15 09:31:06.441057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9240 with addr=10.0.0.2, port=4420
00:22:55.388 [2024-07-15 09:31:06.441080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9240 is same with the state(5) to be set
00:22:55.388 [2024-07-15 09:31:06.441167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.388 [2024-07-15 09:31:06.441194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30280 with addr=10.0.0.2, port=4420
00:22:55.388 [2024-07-15 09:31:06.441211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30280 is same with the state(5) to be set
00:22:55.388 [2024-07-15 09:31:06.441297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.388 [2024-07-15 09:31:06.441324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd3840 with addr=10.0.0.2, port=4420
00:22:55.388 [2024-07-15 09:31:06.441349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3840 is same with the state(5) to be set
00:22:55.388 [2024-07-15 09:31:06.443013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:55.388 [2024-07-15 09:31:06.443043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:55.388 [2024-07-15 09:31:06.443062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:55.388 [2024-07-15 09:31:06.443218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.388 [2024-07-15 09:31:06.443247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd1990 with addr=10.0.0.2, port=4420
00:22:55.388 [2024-07-15 09:31:06.443275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1990 is same with the state(5) to be set
00:22:55.388 [2024-07-15 09:31:06.443353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.388 [2024-07-15 09:31:06.443378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd1b70 with addr=10.0.0.2, port=4420
00:22:55.388 [2024-07-15 09:31:06.443394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1b70 is same with the state(5) to be set
00:22:55.388 [2024-07-15 09:31:06.443480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.388 [2024-07-15 09:31:06.443505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bba20 with addr=10.0.0.2, port=4420
00:22:55.388 [2024-07-15 09:31:06.443521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bba20 is same with the state(5) to be set
00:22:55.388 [2024-07-15 09:31:06.443547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9240 (9): Bad file descriptor
00:22:55.388 [2024-07-15 09:31:06.443570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30280 (9): Bad file descriptor
00:22:55.388 [2024-07-15 09:31:06.443588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd3840 (9): Bad file descriptor
00:22:55.388 [2024-07-15 09:31:06.443639] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:55.388 [2024-07-15 09:31:06.443667] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:55.388 [2024-07-15 09:31:06.443688] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:55.388 [2024-07-15 09:31:06.443709] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
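The summary table is internally consistent: the MiB/s column is IOPS times the 64 KiB IO size, and Fail/s counts failed IOs per second. A quick check with awk against the Nvme1n1 and Nvme2n1 rows (a sketch, not output from this run):

$ awk 'BEGIN { printf "%.2f\n", 222.43 * 65536 / 1048576 }'
13.90
$ awk 'BEGIN { printf "%.2f\n", 152.96 * 65536 / 1048576 }'
9.56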
00:22:55.388 [2024-07-15 09:31:06.444034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:55.388 [2024-07-15 09:31:06.444194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.388 [2024-07-15 09:31:06.444221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf39450 with addr=10.0.0.2, port=4420 00:22:55.388 [2024-07-15 09:31:06.444237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39450 is same with the state(5) to be set 00:22:55.388 [2024-07-15 09:31:06.444318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.388 [2024-07-15 09:31:06.444344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbdec0 with addr=10.0.0.2, port=4420 00:22:55.388 [2024-07-15 09:31:06.444360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbdec0 is same with the state(5) to be set 00:22:55.388 [2024-07-15 09:31:06.444448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.388 [2024-07-15 09:31:06.444473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fc60 with addr=10.0.0.2, port=4420 00:22:55.388 [2024-07-15 09:31:06.444488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fc60 is same with the state(5) to be set 00:22:55.388 [2024-07-15 09:31:06.444508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd1990 (9): Bad file descriptor 00:22:55.388 [2024-07-15 09:31:06.444527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd1b70 (9): Bad file descriptor 00:22:55.388 [2024-07-15 09:31:06.444545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bba20 (9): Bad file descriptor 00:22:55.388 [2024-07-15 09:31:06.444562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:55.388 [2024-07-15 09:31:06.444575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:55.388 [2024-07-15 09:31:06.444596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:55.388 [2024-07-15 09:31:06.444616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:55.388 [2024-07-15 09:31:06.444631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:55.388 [2024-07-15 09:31:06.444644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:55.388 [2024-07-15 09:31:06.444660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:55.388 [2024-07-15 09:31:06.444674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:55.388 [2024-07-15 09:31:06.444687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:55.388 [2024-07-15 09:31:06.444785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:55.388 [2024-07-15 09:31:06.444815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.388 [2024-07-15 09:31:06.444830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.388 [2024-07-15 09:31:06.444912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.388 [2024-07-15 09:31:06.444938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0d830 with addr=10.0.0.2, port=4420 00:22:55.388 [2024-07-15 09:31:06.444954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0d830 is same with the state(5) to be set 00:22:55.388 [2024-07-15 09:31:06.444972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf39450 (9): Bad file descriptor 00:22:55.388 [2024-07-15 09:31:06.444992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbdec0 (9): Bad file descriptor 00:22:55.388 [2024-07-15 09:31:06.445010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fc60 (9): Bad file descriptor 00:22:55.388 [2024-07-15 09:31:06.445026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:55.388 [2024-07-15 09:31:06.445039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:55.388 [2024-07-15 09:31:06.445052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:55.388 [2024-07-15 09:31:06.445069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:55.388 [2024-07-15 09:31:06.445094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:55.388 [2024-07-15 09:31:06.445107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:55.388 [2024-07-15 09:31:06.445124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:55.388 [2024-07-15 09:31:06.445137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:55.388 [2024-07-15 09:31:06.445151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:55.388 [2024-07-15 09:31:06.445189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.388 [2024-07-15 09:31:06.445207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.388 [2024-07-15 09:31:06.445219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.388 [2024-07-15 09:31:06.445235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0d830 (9): Bad file descriptor 00:22:55.388 [2024-07-15 09:31:06.445253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:55.388 [2024-07-15 09:31:06.445266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:55.388 [2024-07-15 09:31:06.445284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:22:55.388 [2024-07-15 09:31:06.445302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:55.388 [2024-07-15 09:31:06.445316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:55.388 [2024-07-15 09:31:06.445330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:55.388 [2024-07-15 09:31:06.445347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:55.388 [2024-07-15 09:31:06.445360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:55.388 [2024-07-15 09:31:06.445373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:55.388 [2024-07-15 09:31:06.445410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.388 [2024-07-15 09:31:06.445427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.388 [2024-07-15 09:31:06.445440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.388 [2024-07-15 09:31:06.445452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:55.388 [2024-07-15 09:31:06.445465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:55.388 [2024-07-15 09:31:06.445478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:55.388 [2024-07-15 09:31:06.445515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
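For reference, errno = 111 in the posix_sock_create failures above is ECONNREFUSED: the bdev layer keeps retrying connections to 10.0.0.2:4420 while the target is already stopping, so every controller lands in "failed state" and the resets fail, consistent with what this shutdown test is driving. On a typical Linux install the code can be confirmed from the kernel headers:

$ grep ECONNREFUSED /usr/include/asm-generic/errno.h
#define ECONNREFUSED    111     /* Connection refused */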
00:22:55.955 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:55.955 09:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:56.894 09:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 883079 00:22:56.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (883079) - No such process 00:22:56.894 09:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:56.894 09:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:56.894 09:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:56.894 09:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:56.894 09:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:56.894 09:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:56.894 09:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:56.894 09:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:56.894 09:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.894 09:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:56.894 09:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.894 09:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.894 rmmod nvme_tcp 00:22:56.894 rmmod nvme_fabrics 00:22:56.894 rmmod nvme_keyring 00:22:56.894 09:31:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.894 09:31:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:56.894 09:31:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:56.894 09:31:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:56.894 09:31:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:56.894 09:31:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:56.894 09:31:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:56.894 09:31:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.894 09:31:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:56.894 09:31:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.894 09:31:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.894 09:31:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.433 09:31:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:59.433 00:22:59.433 real 0m7.241s 00:22:59.433 user 0m16.974s 00:22:59.433 sys 0m1.430s 00:22:59.433 
09:31:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:59.433 09:31:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:59.433 ************************************ 00:22:59.433 END TEST nvmf_shutdown_tc3 00:22:59.433 ************************************ 00:22:59.433 09:31:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:59.433 09:31:10 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:59.433 00:22:59.433 real 0m26.667s 00:22:59.433 user 1m12.801s 00:22:59.433 sys 0m6.197s 00:22:59.433 09:31:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:59.433 09:31:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:59.433 ************************************ 00:22:59.433 END TEST nvmf_shutdown 00:22:59.433 ************************************ 00:22:59.433 09:31:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:59.433 09:31:10 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:22:59.433 09:31:10 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:59.433 09:31:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:59.433 09:31:10 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:22:59.433 09:31:10 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:59.433 09:31:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:59.433 09:31:10 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:22:59.433 09:31:10 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:59.433 09:31:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:59.433 09:31:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.433 09:31:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:59.433 ************************************ 00:22:59.433 START TEST nvmf_multicontroller 00:22:59.433 ************************************ 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:59.433 * Looking for test storage... 
00:22:59.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.433 09:31:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:59.434 09:31:10 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:59.434 09:31:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.338 09:31:12 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:01.338 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:01.338 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:01.338 Found net devices under 0000:09:00.0: cvl_0_0 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.338 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:01.339 Found net devices under 0000:09:00.1: cvl_0_1 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.339 09:31:12 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:23:01.339 00:23:01.339 --- 10.0.0.2 ping statistics --- 00:23:01.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.339 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:23:01.339 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:01.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:23:01.599 00:23:01.599 --- 10.0.0.1 ping statistics --- 00:23:01.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.599 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=885539 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 885539 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 885539 ']' 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.599 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.599 [2024-07-15 09:31:12.619883] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:23:01.599 [2024-07-15 09:31:12.619958] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.599 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.599 [2024-07-15 09:31:12.684922] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:01.860 [2024-07-15 09:31:12.798970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.860 [2024-07-15 09:31:12.799017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.860 [2024-07-15 09:31:12.799031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.860 [2024-07-15 09:31:12.799043] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.860 [2024-07-15 09:31:12.799053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:01.860 [2024-07-15 09:31:12.799106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.860 [2024-07-15 09:31:12.799166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.860 [2024-07-15 09:31:12.799170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.860 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.860 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:01.860 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.860 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.860 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.860 09:31:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.860 09:31:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.860 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.860 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.860 [2024-07-15 09:31:12.938672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.860 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.861 Malloc0 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.861 [2024-07-15 09:31:12.996143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.861 
09:31:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.861 09:31:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.861 [2024-07-15 09:31:13.004033] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.861 Malloc1 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.861 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=885620 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 885620 /var/tmp/bdevperf.sock 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 885620 ']' 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.121 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.380 NVMe0n1 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.380 1 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.380 request: 00:23:02.380 { 00:23:02.380 "name": "NVMe0", 00:23:02.380 "trtype": "tcp", 00:23:02.380 "traddr": "10.0.0.2", 00:23:02.380 "adrfam": "ipv4", 00:23:02.380 "trsvcid": "4420", 00:23:02.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.380 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:02.380 "hostaddr": "10.0.0.2", 00:23:02.380 "hostsvcid": "60000", 00:23:02.380 "prchk_reftag": false, 00:23:02.380 "prchk_guard": false, 00:23:02.380 "hdgst": false, 00:23:02.380 "ddgst": false, 00:23:02.380 "method": "bdev_nvme_attach_controller", 00:23:02.380 "req_id": 1 00:23:02.380 } 00:23:02.380 Got JSON-RPC error response 00:23:02.380 response: 00:23:02.380 { 00:23:02.380 "code": -114, 00:23:02.380 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:02.380 } 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.380 request: 00:23:02.380 { 00:23:02.380 "name": "NVMe0", 00:23:02.380 "trtype": "tcp", 00:23:02.380 "traddr": "10.0.0.2", 00:23:02.380 "adrfam": "ipv4", 00:23:02.380 "trsvcid": "4420", 00:23:02.380 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:02.380 "hostaddr": "10.0.0.2", 00:23:02.380 "hostsvcid": "60000", 00:23:02.380 
"prchk_reftag": false, 00:23:02.380 "prchk_guard": false, 00:23:02.380 "hdgst": false, 00:23:02.380 "ddgst": false, 00:23:02.380 "method": "bdev_nvme_attach_controller", 00:23:02.380 "req_id": 1 00:23:02.380 } 00:23:02.380 Got JSON-RPC error response 00:23:02.380 response: 00:23:02.380 { 00:23:02.380 "code": -114, 00:23:02.380 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:02.380 } 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.380 request: 00:23:02.380 { 00:23:02.380 "name": "NVMe0", 00:23:02.380 "trtype": "tcp", 00:23:02.380 "traddr": "10.0.0.2", 00:23:02.380 "adrfam": "ipv4", 00:23:02.380 "trsvcid": "4420", 00:23:02.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.380 "hostaddr": "10.0.0.2", 00:23:02.380 "hostsvcid": "60000", 00:23:02.380 "prchk_reftag": false, 00:23:02.380 "prchk_guard": false, 00:23:02.380 "hdgst": false, 00:23:02.380 "ddgst": false, 00:23:02.380 "multipath": "disable", 00:23:02.380 "method": "bdev_nvme_attach_controller", 00:23:02.380 "req_id": 1 00:23:02.380 } 00:23:02.380 Got JSON-RPC error response 00:23:02.380 response: 00:23:02.380 { 00:23:02.380 "code": -114, 00:23:02.380 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:02.380 } 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.380 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.380 request: 00:23:02.380 { 00:23:02.380 "name": "NVMe0", 00:23:02.380 "trtype": "tcp", 00:23:02.380 "traddr": "10.0.0.2", 00:23:02.380 "adrfam": "ipv4", 00:23:02.380 "trsvcid": "4420", 00:23:02.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.381 "hostaddr": "10.0.0.2", 00:23:02.381 "hostsvcid": "60000", 00:23:02.381 "prchk_reftag": false, 00:23:02.381 "prchk_guard": false, 00:23:02.381 "hdgst": false, 00:23:02.381 "ddgst": false, 00:23:02.381 "multipath": "failover", 00:23:02.381 "method": "bdev_nvme_attach_controller", 00:23:02.381 "req_id": 1 00:23:02.381 } 00:23:02.381 Got JSON-RPC error response 00:23:02.381 response: 00:23:02.381 { 00:23:02.381 "code": -114, 00:23:02.381 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:02.381 } 00:23:02.381 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:02.381 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:02.381 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:02.381 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:02.381 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:02.381 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:02.381 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.381 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.638 
00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.638 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:02.638 09:31:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:04.009 0 00:23:04.009 09:31:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:04.009 09:31:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.009 09:31:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.009 09:31:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.009 09:31:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 885620 00:23:04.009 09:31:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 885620 ']' 00:23:04.009 09:31:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 885620 00:23:04.009 09:31:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:04.009 09:31:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.009 09:31:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 885620 00:23:04.009 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:04.009 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:04.009 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 885620' 00:23:04.009 
killing process with pid 885620 00:23:04.009 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 885620 00:23:04.009 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 885620 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:04.269 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:04.269 [2024-07-15 09:31:13.109229] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:23:04.269 [2024-07-15 09:31:13.109322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885620 ] 00:23:04.269 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.269 [2024-07-15 09:31:13.171856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.269 [2024-07-15 09:31:13.283261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.269 [2024-07-15 09:31:13.807629] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 3ebf3474-1947-48ae-9503-99712994867a already exists 00:23:04.269 [2024-07-15 09:31:13.807672] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:3ebf3474-1947-48ae-9503-99712994867a alias for bdev NVMe1n1 00:23:04.269 [2024-07-15 09:31:13.807687] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:04.269 Running I/O for 1 seconds... 
00:23:04.269 00:23:04.269 Latency(us) 00:23:04.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.269 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:04.269 NVMe0n1 : 1.01 18808.78 73.47 0.00 0.00 6795.00 4271.98 12136.30 00:23:04.269 =================================================================================================================== 00:23:04.269 Total : 18808.78 73.47 0.00 0.00 6795.00 4271.98 12136.30 00:23:04.269 Received shutdown signal, test time was about 1.000000 seconds 00:23:04.269 00:23:04.269 Latency(us) 00:23:04.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.269 =================================================================================================================== 00:23:04.269 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.269 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:04.269 rmmod nvme_tcp 00:23:04.269 rmmod nvme_fabrics 00:23:04.269 rmmod nvme_keyring 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 885539 ']' 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 885539 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 885539 ']' 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 885539 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 885539 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 885539' 00:23:04.269 killing process with pid 885539 00:23:04.269 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 885539 00:23:04.269 09:31:15 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 885539 00:23:04.527 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:04.527 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:04.527 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:04.527 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.527 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.527 09:31:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.527 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.527 09:31:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.056 09:31:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:07.056 00:23:07.056 real 0m7.543s 00:23:07.056 user 0m11.419s 00:23:07.056 sys 0m2.373s 00:23:07.056 09:31:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:07.056 09:31:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.056 ************************************ 00:23:07.056 END TEST nvmf_multicontroller 00:23:07.056 ************************************ 00:23:07.056 09:31:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:07.056 09:31:17 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:07.056 09:31:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:07.056 09:31:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:07.056 09:31:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:07.056 ************************************ 00:23:07.056 START TEST nvmf_aer 00:23:07.056 ************************************ 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:07.056 * Looking for test storage... 
00:23:07.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:07.056 09:31:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:08.964 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 
0x159b)' 00:23:08.964 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:08.964 Found net devices under 0000:09:00.0: cvl_0_0 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:08.964 Found net devices under 0000:09:00.1: cvl_0_1 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.964 
09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.964 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:08.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:23:08.965 00:23:08.965 --- 10.0.0.2 ping statistics --- 00:23:08.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.965 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:23:08.965 09:31:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:08.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:23:08.965 00:23:08.965 --- 10.0.0.1 ping statistics --- 00:23:08.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.965 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=887827 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 887827 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 887827 ']' 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.965 09:31:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.965 [2024-07-15 09:31:20.087426] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:23:08.965 [2024-07-15 09:31:20.087518] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.965 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.225 [2024-07-15 09:31:20.161906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.225 [2024-07-15 09:31:20.278449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.225 [2024-07-15 09:31:20.278498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:09.225 [2024-07-15 09:31:20.278512] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.225 [2024-07-15 09:31:20.278523] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.225 [2024-07-15 09:31:20.278533] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.225 [2024-07-15 09:31:20.280832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.225 [2024-07-15 09:31:20.280901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.225 [2024-07-15 09:31:20.280931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.225 [2024-07-15 09:31:20.280935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.163 [2024-07-15 09:31:21.091754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.163 Malloc0 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.163 [2024-07-15 09:31:21.144740] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.163 [ 00:23:10.163 { 00:23:10.163 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:10.163 "subtype": "Discovery", 00:23:10.163 "listen_addresses": [], 00:23:10.163 "allow_any_host": true, 00:23:10.163 "hosts": [] 00:23:10.163 }, 00:23:10.163 { 00:23:10.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.163 "subtype": "NVMe", 00:23:10.163 "listen_addresses": [ 00:23:10.163 { 00:23:10.163 "trtype": "TCP", 00:23:10.163 "adrfam": "IPv4", 00:23:10.163 "traddr": "10.0.0.2", 00:23:10.163 "trsvcid": "4420" 00:23:10.163 } 00:23:10.163 ], 00:23:10.163 "allow_any_host": true, 00:23:10.163 "hosts": [], 00:23:10.163 "serial_number": "SPDK00000000000001", 00:23:10.163 "model_number": "SPDK bdev Controller", 00:23:10.163 "max_namespaces": 2, 00:23:10.163 "min_cntlid": 1, 00:23:10.163 "max_cntlid": 65519, 00:23:10.163 "namespaces": [ 00:23:10.163 { 00:23:10.163 "nsid": 1, 00:23:10.163 "bdev_name": "Malloc0", 00:23:10.163 "name": "Malloc0", 00:23:10.163 "nguid": "7AE6ADE5AE8F4D92ADCEFAD1B5BBB3DF", 00:23:10.163 "uuid": "7ae6ade5-ae8f-4d92-adce-fad1b5bbb3df" 00:23:10.163 } 00:23:10.163 ] 00:23:10.163 } 00:23:10.163 ] 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=887980 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:10.163 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:10.163 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.421 Malloc1 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.421 [ 00:23:10.421 { 00:23:10.421 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:10.421 "subtype": "Discovery", 00:23:10.421 "listen_addresses": [], 00:23:10.421 "allow_any_host": true, 00:23:10.421 "hosts": [] 00:23:10.421 }, 00:23:10.421 { 00:23:10.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.421 "subtype": "NVMe", 00:23:10.421 "listen_addresses": [ 00:23:10.421 { 00:23:10.421 "trtype": "TCP", 00:23:10.421 "adrfam": "IPv4", 00:23:10.421 "traddr": "10.0.0.2", 00:23:10.421 "trsvcid": "4420" 00:23:10.421 } 00:23:10.421 ], 00:23:10.421 "allow_any_host": true, 00:23:10.421 "hosts": [], 00:23:10.421 "serial_number": "SPDK00000000000001", 00:23:10.421 "model_number": "SPDK bdev Controller", 00:23:10.421 "max_namespaces": 2, 00:23:10.421 "min_cntlid": 1, 00:23:10.421 "max_cntlid": 65519, 00:23:10.421 "namespaces": [ 00:23:10.421 { 00:23:10.421 "nsid": 1, 00:23:10.421 "bdev_name": "Malloc0", 00:23:10.421 "name": "Malloc0", 00:23:10.421 "nguid": "7AE6ADE5AE8F4D92ADCEFAD1B5BBB3DF", 00:23:10.421 "uuid": "7ae6ade5-ae8f-4d92-adce-fad1b5bbb3df" 00:23:10.421 }, 00:23:10.421 { 00:23:10.421 "nsid": 2, 00:23:10.421 "bdev_name": "Malloc1", 00:23:10.421 "name": "Malloc1", 00:23:10.421 "nguid": "D341ABF7448E4746B69B92ED2C314BAA", 00:23:10.421 "uuid": "d341abf7-448e-4746-b69b-92ed2c314baa" 00:23:10.421 } 00:23:10.421 ] 00:23:10.421 } 00:23:10.421 ] 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 887980 00:23:10.421 Asynchronous Event Request test 00:23:10.421 Attaching to 10.0.0.2 00:23:10.421 Attached to 10.0.0.2 00:23:10.421 Registering asynchronous event callbacks... 00:23:10.421 Starting namespace attribute notice tests for all controllers... 
00:23:10.421 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:10.421 aer_cb - Changed Namespace 00:23:10.421 Cleaning up... 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.421 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:10.679 rmmod nvme_tcp 00:23:10.679 rmmod nvme_fabrics 00:23:10.679 rmmod nvme_keyring 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 887827 ']' 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 887827 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 887827 ']' 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 887827 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 887827 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 887827' 00:23:10.679 killing process with pid 887827 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 887827 00:23:10.679 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 887827 00:23:10.938 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
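Before the teardown completes, the RPC sequence the aer test just drove is worth seeing in one place. This is a hedged consolidation of the commands traced above, not an excerpt of host/aer.sh: flags, NQNs and sizes are copied from the log, and the `waitforfile` poller is reconstructed from the traced loop (0.1 s steps, at most 200 tries):

```bash
rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"

# Target side: tcp transport, one subsystem capped at 2 namespaces (-m 2),
# a 64 MiB / 512 B-block Malloc namespace, and a listener on 10.0.0.2:4420.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# waitforfile, reconstructed from the trace: poll for the touch file the
# aer tool creates once its AER callbacks are registered.
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
        i=$((i + 1))
        sleep 0.1
    done
    [ -e "$file" ]
}
waitforfile /tmp/aer_touch_file

# With the aer tool attached, adding a second namespace is what fires the
# "Changed Namespace" asynchronous event (log page 4) seen above.
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
```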
00:23:10.938 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:10.938 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:10.938 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.938 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.938 09:31:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.938 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.938 09:31:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.843 09:31:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:12.843 00:23:12.843 real 0m6.223s 00:23:12.843 user 0m7.461s 00:23:12.843 sys 0m2.016s 00:23:12.843 09:31:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:12.843 09:31:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.843 ************************************ 00:23:12.843 END TEST nvmf_aer 00:23:12.843 ************************************ 00:23:12.843 09:31:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:12.843 09:31:24 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:12.843 09:31:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:12.843 09:31:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:12.843 09:31:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.843 ************************************ 00:23:12.843 START TEST nvmf_async_init 00:23:12.843 ************************************ 00:23:12.843 09:31:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:13.102 * Looking for test storage... 
00:23:13.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=72c69e2ad978453ca1efa6581b585204 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:13.102 09:31:24 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.102 09:31:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.009 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.009 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:15.009 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:15.009 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:15.009 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:15.009 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:15.009 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:15.009 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:15.009 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:15.010 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:15.010 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:15.010 Found net devices under 0000:09:00.0: cvl_0_0 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
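The `gather_supported_nvmf_pci_devs` walk above matches known Intel/Mellanox device IDs against the PCI bus and then resolves each hit to its kernel net devices through sysfs; that is where the `Found net devices under 0000:09:00.0: cvl_0_0` lines come from. A stripped-down sketch of the sysfs lookup, with the two E810 PCI addresses taken from the log:

```bash
# Resolve each NVMf-capable PCI function to its net devices via sysfs,
# mirroring: pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
for pci in 0000:09:00.0 0000:09:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue          # no net device bound to this function
        echo "Found net devices under $pci: ${path##*/}"
    done
done
```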
00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:15.010 Found net devices under 0000:09:00.1: cvl_0_1 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.010 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:15.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:23:15.268 00:23:15.268 --- 10.0.0.2 ping statistics --- 00:23:15.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.268 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:23:15.268 00:23:15.268 --- 10.0.0.1 ping statistics --- 00:23:15.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.268 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=889975 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 889975 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 889975 ']' 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.268 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.268 [2024-07-15 09:31:26.336559] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
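The connectivity check that just passed rests on the namespace plumbing traced a few lines earlier. Gathered into one hedged consolidation (not the common.sh source), with interface names and addresses exactly as in the log: the target-facing E810 port (cvl_0_0) is moved into its own netns, each side gets a /24 address, the NVMe/TCP port is opened, and one ping is run in each direction.

```bash
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target-facing port
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                               # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator
```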
00:23:15.268 [2024-07-15 09:31:26.336644] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.268 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.268 [2024-07-15 09:31:26.398316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.525 [2024-07-15 09:31:26.507938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.525 [2024-07-15 09:31:26.508000] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.525 [2024-07-15 09:31:26.508014] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.525 [2024-07-15 09:31:26.508024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.525 [2024-07-15 09:31:26.508034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.525 [2024-07-15 09:31:26.508061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.525 [2024-07-15 09:31:26.640541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.525 null0 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.525 09:31:26 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 72c69e2ad978453ca1efa6581b585204 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.525 [2024-07-15 09:31:26.680770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.525 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.782 nvme0n1 00:23:15.782 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.782 09:31:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:15.782 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.782 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.782 [ 00:23:15.782 { 00:23:15.782 "name": "nvme0n1", 00:23:15.782 "aliases": [ 00:23:15.782 "72c69e2a-d978-453c-a1ef-a6581b585204" 00:23:15.782 ], 00:23:15.782 "product_name": "NVMe disk", 00:23:15.782 "block_size": 512, 00:23:15.782 "num_blocks": 2097152, 00:23:15.782 "uuid": "72c69e2a-d978-453c-a1ef-a6581b585204", 00:23:15.782 "assigned_rate_limits": { 00:23:15.782 "rw_ios_per_sec": 0, 00:23:15.782 "rw_mbytes_per_sec": 0, 00:23:15.782 "r_mbytes_per_sec": 0, 00:23:15.782 "w_mbytes_per_sec": 0 00:23:15.782 }, 00:23:15.782 "claimed": false, 00:23:15.782 "zoned": false, 00:23:15.782 "supported_io_types": { 00:23:15.782 "read": true, 00:23:15.782 "write": true, 00:23:15.782 "unmap": false, 00:23:15.782 "flush": true, 00:23:15.782 "reset": true, 00:23:15.782 "nvme_admin": true, 00:23:15.782 "nvme_io": true, 00:23:15.782 "nvme_io_md": false, 00:23:15.782 "write_zeroes": true, 00:23:15.782 "zcopy": false, 00:23:15.782 "get_zone_info": false, 00:23:15.782 "zone_management": false, 00:23:15.782 "zone_append": false, 00:23:15.782 "compare": true, 00:23:15.782 "compare_and_write": true, 00:23:15.782 "abort": true, 00:23:15.782 "seek_hole": false, 00:23:15.782 "seek_data": false, 00:23:15.782 "copy": true, 00:23:15.782 "nvme_iov_md": false 00:23:15.782 }, 00:23:15.782 "memory_domains": [ 00:23:15.782 { 00:23:15.782 "dma_device_id": "system", 00:23:15.782 "dma_device_type": 1 00:23:15.782 } 00:23:15.782 ], 00:23:15.782 "driver_specific": { 00:23:15.782 "nvme": [ 00:23:15.782 { 00:23:15.782 "trid": { 00:23:15.782 "trtype": "TCP", 00:23:15.782 "adrfam": "IPv4", 00:23:15.782 "traddr": "10.0.0.2", 
00:23:15.782 "trsvcid": "4420", 00:23:15.782 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:15.782 }, 00:23:15.782 "ctrlr_data": { 00:23:15.782 "cntlid": 1, 00:23:15.782 "vendor_id": "0x8086", 00:23:15.782 "model_number": "SPDK bdev Controller", 00:23:15.782 "serial_number": "00000000000000000000", 00:23:15.782 "firmware_revision": "24.09", 00:23:15.782 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.782 "oacs": { 00:23:15.782 "security": 0, 00:23:15.782 "format": 0, 00:23:15.782 "firmware": 0, 00:23:15.782 "ns_manage": 0 00:23:15.782 }, 00:23:15.782 "multi_ctrlr": true, 00:23:15.782 "ana_reporting": false 00:23:15.782 }, 00:23:15.782 "vs": { 00:23:15.782 "nvme_version": "1.3" 00:23:15.782 }, 00:23:15.782 "ns_data": { 00:23:15.782 "id": 1, 00:23:15.782 "can_share": true 00:23:15.782 } 00:23:15.782 } 00:23:15.782 ], 00:23:15.782 "mp_policy": "active_passive" 00:23:15.782 } 00:23:15.782 } 00:23:15.782 ] 00:23:15.782 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.782 09:31:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:15.782 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.782 09:31:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.782 [2024-07-15 09:31:26.929506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:15.782 [2024-07-15 09:31:26.929581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2471090 (9): Bad file descriptor 00:23:16.040 [2024-07-15 09:31:27.061942] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.040 [ 00:23:16.040 { 00:23:16.040 "name": "nvme0n1", 00:23:16.040 "aliases": [ 00:23:16.040 "72c69e2a-d978-453c-a1ef-a6581b585204" 00:23:16.040 ], 00:23:16.040 "product_name": "NVMe disk", 00:23:16.040 "block_size": 512, 00:23:16.040 "num_blocks": 2097152, 00:23:16.040 "uuid": "72c69e2a-d978-453c-a1ef-a6581b585204", 00:23:16.040 "assigned_rate_limits": { 00:23:16.040 "rw_ios_per_sec": 0, 00:23:16.040 "rw_mbytes_per_sec": 0, 00:23:16.040 "r_mbytes_per_sec": 0, 00:23:16.040 "w_mbytes_per_sec": 0 00:23:16.040 }, 00:23:16.040 "claimed": false, 00:23:16.040 "zoned": false, 00:23:16.040 "supported_io_types": { 00:23:16.040 "read": true, 00:23:16.040 "write": true, 00:23:16.040 "unmap": false, 00:23:16.040 "flush": true, 00:23:16.040 "reset": true, 00:23:16.040 "nvme_admin": true, 00:23:16.040 "nvme_io": true, 00:23:16.040 "nvme_io_md": false, 00:23:16.040 "write_zeroes": true, 00:23:16.040 "zcopy": false, 00:23:16.040 "get_zone_info": false, 00:23:16.040 "zone_management": false, 00:23:16.040 "zone_append": false, 00:23:16.040 "compare": true, 00:23:16.040 "compare_and_write": true, 00:23:16.040 "abort": true, 00:23:16.040 "seek_hole": false, 00:23:16.040 "seek_data": false, 00:23:16.040 "copy": true, 00:23:16.040 "nvme_iov_md": false 00:23:16.040 }, 00:23:16.040 "memory_domains": [ 00:23:16.040 { 00:23:16.040 "dma_device_id": "system", 00:23:16.040 "dma_device_type": 
1 00:23:16.040 } 00:23:16.040 ], 00:23:16.040 "driver_specific": { 00:23:16.040 "nvme": [ 00:23:16.040 { 00:23:16.040 "trid": { 00:23:16.040 "trtype": "TCP", 00:23:16.040 "adrfam": "IPv4", 00:23:16.040 "traddr": "10.0.0.2", 00:23:16.040 "trsvcid": "4420", 00:23:16.040 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:16.040 }, 00:23:16.040 "ctrlr_data": { 00:23:16.040 "cntlid": 2, 00:23:16.040 "vendor_id": "0x8086", 00:23:16.040 "model_number": "SPDK bdev Controller", 00:23:16.040 "serial_number": "00000000000000000000", 00:23:16.040 "firmware_revision": "24.09", 00:23:16.040 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:16.040 "oacs": { 00:23:16.040 "security": 0, 00:23:16.040 "format": 0, 00:23:16.040 "firmware": 0, 00:23:16.040 "ns_manage": 0 00:23:16.040 }, 00:23:16.040 "multi_ctrlr": true, 00:23:16.040 "ana_reporting": false 00:23:16.040 }, 00:23:16.040 "vs": { 00:23:16.040 "nvme_version": "1.3" 00:23:16.040 }, 00:23:16.040 "ns_data": { 00:23:16.040 "id": 1, 00:23:16.040 "can_share": true 00:23:16.040 } 00:23:16.040 } 00:23:16.040 ], 00:23:16.040 "mp_policy": "active_passive" 00:23:16.040 } 00:23:16.040 } 00:23:16.040 ] 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Gfwc2Xszdv 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Gfwc2Xszdv 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.040 [2024-07-15 09:31:27.110127] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.040 [2024-07-15 09:31:27.110234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Gfwc2Xszdv 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
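The async_init flow above, consolidated into one hedged sketch (commands and arguments copied from the trace, not from host/async_init.sh). The first half is the plain-TCP leg, including the controller reset that bumps cntlid from 1 to 2 in the `bdev_get_bdevs` output; the second half is the TLS leg just configured, with the interchange-format PSK written to a mode-0600 file:

```bash
rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"

# Plain-TCP leg: null bdev as the namespace, fixed NGUID (uuidgen with the
# dashes stripped, which reappears as the bdev uuid), attach/reset/detach.
$rpc bdev_null_create null0 1024 512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 72c69e2ad978453ca1efa6581b585204
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0
$rpc bdev_nvme_reset_controller nvme0            # cntlid 1 -> 2
$rpc bdev_nvme_detach_controller nvme0

# TLS leg: PSK on disk, secure-channel listener on 4421, per-host allow.
KEY=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
chmod 0600 "$KEY"
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 \
    --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
```

The file-path form of `--psk` is the deprecated interface the WARNING lines below call out; the log notes it is scheduled for removal in v24.09.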
00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.040 [2024-07-15 09:31:27.118143] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Gfwc2Xszdv 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.040 [2024-07-15 09:31:27.126169] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.040 [2024-07-15 09:31:27.126225] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:16.040 nvme0n1 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.040 [ 00:23:16.040 { 00:23:16.040 "name": "nvme0n1", 00:23:16.040 "aliases": [ 00:23:16.040 "72c69e2a-d978-453c-a1ef-a6581b585204" 00:23:16.040 ], 00:23:16.040 "product_name": "NVMe disk", 00:23:16.040 "block_size": 512, 00:23:16.040 "num_blocks": 2097152, 00:23:16.040 "uuid": "72c69e2a-d978-453c-a1ef-a6581b585204", 00:23:16.040 "assigned_rate_limits": { 00:23:16.040 "rw_ios_per_sec": 0, 00:23:16.040 "rw_mbytes_per_sec": 0, 00:23:16.040 "r_mbytes_per_sec": 0, 00:23:16.040 "w_mbytes_per_sec": 0 00:23:16.040 }, 00:23:16.040 "claimed": false, 00:23:16.040 "zoned": false, 00:23:16.040 "supported_io_types": { 00:23:16.040 "read": true, 00:23:16.040 "write": true, 00:23:16.040 "unmap": false, 00:23:16.040 "flush": true, 00:23:16.040 "reset": true, 00:23:16.040 "nvme_admin": true, 00:23:16.040 "nvme_io": true, 00:23:16.040 "nvme_io_md": false, 00:23:16.040 "write_zeroes": true, 00:23:16.040 "zcopy": false, 00:23:16.040 "get_zone_info": false, 00:23:16.040 "zone_management": false, 00:23:16.040 "zone_append": false, 00:23:16.040 "compare": true, 00:23:16.040 "compare_and_write": true, 00:23:16.040 "abort": true, 00:23:16.040 "seek_hole": false, 00:23:16.040 "seek_data": false, 00:23:16.040 "copy": true, 00:23:16.040 "nvme_iov_md": false 00:23:16.040 }, 00:23:16.040 "memory_domains": [ 00:23:16.040 { 00:23:16.040 "dma_device_id": "system", 00:23:16.040 "dma_device_type": 1 00:23:16.040 } 00:23:16.040 ], 00:23:16.040 "driver_specific": { 00:23:16.040 "nvme": [ 00:23:16.040 { 00:23:16.040 "trid": { 00:23:16.040 "trtype": "TCP", 00:23:16.040 "adrfam": "IPv4", 00:23:16.040 "traddr": "10.0.0.2", 00:23:16.040 "trsvcid": "4421", 00:23:16.040 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:16.040 }, 00:23:16.040 "ctrlr_data": { 00:23:16.040 "cntlid": 3, 00:23:16.040 "vendor_id": "0x8086", 00:23:16.040 "model_number": "SPDK bdev Controller", 00:23:16.040 "serial_number": "00000000000000000000", 00:23:16.040 "firmware_revision": "24.09", 00:23:16.040 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:23:16.040 "oacs": { 00:23:16.040 "security": 0, 00:23:16.040 "format": 0, 00:23:16.040 "firmware": 0, 00:23:16.040 "ns_manage": 0 00:23:16.040 }, 00:23:16.040 "multi_ctrlr": true, 00:23:16.040 "ana_reporting": false 00:23:16.040 }, 00:23:16.040 "vs": { 00:23:16.040 "nvme_version": "1.3" 00:23:16.040 }, 00:23:16.040 "ns_data": { 00:23:16.040 "id": 1, 00:23:16.040 "can_share": true 00:23:16.040 } 00:23:16.040 } 00:23:16.040 ], 00:23:16.040 "mp_policy": "active_passive" 00:23:16.040 } 00:23:16.040 } 00:23:16.040 ] 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Gfwc2Xszdv 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:16.040 09:31:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:16.041 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:16.041 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:16.041 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:16.041 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:16.041 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:16.041 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:16.297 rmmod nvme_tcp 00:23:16.297 rmmod nvme_fabrics 00:23:16.297 rmmod nvme_keyring 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 889975 ']' 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 889975 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 889975 ']' 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 889975 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 889975 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 889975' 00:23:16.297 killing process with pid 889975 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 889975 00:23:16.297 [2024-07-15 09:31:27.315874] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:23:16.297 [2024-07-15 09:31:27.315906] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:16.297 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 889975 00:23:16.556 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:16.556 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:16.556 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:16.556 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:16.556 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:16.556 09:31:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.556 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.556 09:31:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.459 09:31:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:18.459 00:23:18.459 real 0m5.563s 00:23:18.459 user 0m2.050s 00:23:18.459 sys 0m1.886s 00:23:18.459 09:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:18.459 09:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.459 ************************************ 00:23:18.459 END TEST nvmf_async_init 00:23:18.459 ************************************ 00:23:18.459 09:31:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:18.459 09:31:29 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:18.459 09:31:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:18.459 09:31:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.459 09:31:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.459 ************************************ 00:23:18.459 START TEST dma 00:23:18.459 ************************************ 00:23:18.459 09:31:29 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:18.718 * Looking for test storage... 
00:23:18.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:18.718 09:31:29 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.718 09:31:29 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.718 09:31:29 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.718 09:31:29 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.718 09:31:29 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.718 09:31:29 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.718 09:31:29 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.718 09:31:29 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:18.718 09:31:29 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:18.718 09:31:29 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:18.718 09:31:29 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:18.718 09:31:29 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:18.718 00:23:18.718 real 0m0.073s 00:23:18.718 user 0m0.036s 00:23:18.718 sys 0m0.042s 00:23:18.718 09:31:29 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:18.718 09:31:29 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:18.718 ************************************ 00:23:18.718 END TEST dma 00:23:18.718 ************************************ 00:23:18.718 09:31:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:18.718 09:31:29 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:18.718 09:31:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:18.718 09:31:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.718 09:31:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.718 ************************************ 00:23:18.718 START TEST nvmf_identify 00:23:18.718 ************************************ 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:18.718 * Looking for test storage... 
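Note that TEST dma completes in 0m0.073s because host/dma.sh (lines 12-13 in the xtrace above) bails out before doing any work when the transport is not rdma. A minimal sketch of that guard; the variable name TEST_TRANSPORT is an assumption, as the trace only shows the already-expanded literals tcp and rdma:

    # host/dma.sh: the dma path is RDMA-only, so skip the whole test on tcp runs
    if [ "$TEST_TRANSPORT" != rdma ]; then
        exit 0
    fi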
00:23:18.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.718 09:31:29 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:18.719 09:31:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:21.246 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.246 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:21.246 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:21.246 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:21.246 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:21.246 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:21.246 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:21.246 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:21.247 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:21.247 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:21.247 Found net devices under 0000:09:00.0: cvl_0_0 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:21.247 Found net devices under 0000:09:00.1: cvl_0_1 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:21.247 09:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:21.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:23:21.247 00:23:21.247 --- 10.0.0.2 ping statistics --- 00:23:21.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.247 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:21.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:21.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:23:21.247 00:23:21.247 --- 10.0.0.1 ping statistics --- 00:23:21.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.247 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=892164 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.247 09:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 892164 00:23:21.248 09:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 892164 ']' 00:23:21.248 09:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.248 09:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.248 09:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.248 09:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.248 09:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:21.248 [2024-07-15 09:31:32.097985] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:23:21.248 [2024-07-15 09:31:32.098079] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.248 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.248 [2024-07-15 09:31:32.162895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:21.248 [2024-07-15 09:31:32.270627] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
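With connectivity confirmed by the two pings above, identify.sh starts the target. The launch captured in the trace reduces to running nvmf_tgt inside the target-side namespace and waiting for its RPC socket; a condensed sketch using the exact flags from the log (waitforlisten is the autotest helper, approximated here by polling for the default socket path):

    # start the NVMe-oF target in the namespace: shm id 0, all tracepoint
    # groups enabled (-e 0xFFFF), four cores (-m 0xF)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # block until the app is listening on its RPC socket before issuing RPCs
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done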
00:23:21.248 [2024-07-15 09:31:32.270689] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.248 [2024-07-15 09:31:32.270712] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.248 [2024-07-15 09:31:32.270723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.248 [2024-07-15 09:31:32.270733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.248 [2024-07-15 09:31:32.270821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.248 [2024-07-15 09:31:32.270879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.248 [2024-07-15 09:31:32.270906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:21.248 [2024-07-15 09:31:32.270909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.181 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:22.181 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:23:22.181 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 [2024-07-15 09:31:33.079634] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 Malloc0 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 [2024-07-15 09:31:33.155128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 [ 00:23:22.182 { 00:23:22.182 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:22.182 "subtype": "Discovery", 00:23:22.182 "listen_addresses": [ 00:23:22.182 { 00:23:22.182 "trtype": "TCP", 00:23:22.182 "adrfam": "IPv4", 00:23:22.182 "traddr": "10.0.0.2", 00:23:22.182 "trsvcid": "4420" 00:23:22.182 } 00:23:22.182 ], 00:23:22.182 "allow_any_host": true, 00:23:22.182 "hosts": [] 00:23:22.182 }, 00:23:22.182 { 00:23:22.182 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.182 "subtype": "NVMe", 00:23:22.182 "listen_addresses": [ 00:23:22.182 { 00:23:22.182 "trtype": "TCP", 00:23:22.182 "adrfam": "IPv4", 00:23:22.182 "traddr": "10.0.0.2", 00:23:22.182 "trsvcid": "4420" 00:23:22.182 } 00:23:22.182 ], 00:23:22.182 "allow_any_host": true, 00:23:22.182 "hosts": [], 00:23:22.182 "serial_number": "SPDK00000000000001", 00:23:22.182 "model_number": "SPDK bdev Controller", 00:23:22.182 "max_namespaces": 32, 00:23:22.182 "min_cntlid": 1, 00:23:22.182 "max_cntlid": 65519, 00:23:22.182 "namespaces": [ 00:23:22.182 { 00:23:22.182 "nsid": 1, 00:23:22.182 "bdev_name": "Malloc0", 00:23:22.182 "name": "Malloc0", 00:23:22.182 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:22.182 "eui64": "ABCDEF0123456789", 00:23:22.182 "uuid": "be4f7297-cf39-4c35-861c-2bbe43e169c3" 00:23:22.182 } 00:23:22.182 ] 00:23:22.182 } 00:23:22.182 ] 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.182 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:22.182 [2024-07-15 09:31:33.197202] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
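The rpc_cmd calls above assemble the target state that nvmf_get_subsystems then reports: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that bdev as namespace 1, and data plus discovery listeners on 10.0.0.2:4420. The same sequence could be issued by hand with SPDK's rpc.py client; treating rpc_cmd as a thin wrapper around rpc.py is an assumption of this sketch:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems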
00:23:22.182 [2024-07-15 09:31:33.197250] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892315 ] 00:23:22.182 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.182 [2024-07-15 09:31:33.233159] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:22.182 [2024-07-15 09:31:33.233223] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:22.182 [2024-07-15 09:31:33.233234] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:22.182 [2024-07-15 09:31:33.233249] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:22.182 [2024-07-15 09:31:33.233259] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:22.182 [2024-07-15 09:31:33.233577] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:22.182 [2024-07-15 09:31:33.233636] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f5a540 0 00:23:22.182 [2024-07-15 09:31:33.239819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:22.182 [2024-07-15 09:31:33.239843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:22.182 [2024-07-15 09:31:33.239851] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:22.182 [2024-07-15 09:31:33.239858] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:22.182 [2024-07-15 09:31:33.239911] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.239924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.239931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5a540) 00:23:22.182 [2024-07-15 09:31:33.239948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:22.182 [2024-07-15 09:31:33.239975] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba3c0, cid 0, qid 0 00:23:22.182 [2024-07-15 09:31:33.247816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.182 [2024-07-15 09:31:33.247833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.182 [2024-07-15 09:31:33.247840] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.247847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba3c0) on tqpair=0x1f5a540 00:23:22.182 [2024-07-15 09:31:33.247862] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:22.182 [2024-07-15 09:31:33.247887] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:22.182 [2024-07-15 09:31:33.247897] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:22.182 [2024-07-15 09:31:33.247918] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.247927] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.247933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5a540) 00:23:22.182 [2024-07-15 09:31:33.247944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-07-15 09:31:33.247968] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba3c0, cid 0, qid 0 00:23:22.182 [2024-07-15 09:31:33.248092] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.182 [2024-07-15 09:31:33.248106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.182 [2024-07-15 09:31:33.248113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.248119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba3c0) on tqpair=0x1f5a540 00:23:22.182 [2024-07-15 09:31:33.248128] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:22.182 [2024-07-15 09:31:33.248140] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:22.182 [2024-07-15 09:31:33.248152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.248160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.248166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5a540) 00:23:22.182 [2024-07-15 09:31:33.248177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-07-15 09:31:33.248198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba3c0, cid 0, qid 0 00:23:22.182 [2024-07-15 09:31:33.248279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.182 [2024-07-15 09:31:33.248291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.182 [2024-07-15 09:31:33.248297] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.248304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba3c0) on tqpair=0x1f5a540 00:23:22.182 [2024-07-15 09:31:33.248312] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:22.182 [2024-07-15 09:31:33.248330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:22.182 [2024-07-15 09:31:33.248343] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.248350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.248357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5a540) 00:23:22.182 [2024-07-15 09:31:33.248367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-07-15 09:31:33.248389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba3c0, cid 0, qid 0 00:23:22.182 [2024-07-15 09:31:33.248468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.182 
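The *DEBUG* lines around here trace the standard NVMe-oF controller bring-up against the discovery subsystem, step by step: ICReq/ICResp over the TCP socket, FABRIC CONNECT on the admin queue (CNTLID 0x0001), property reads of VS and CAP, CC.EN toggled from 0 to 1, a wait for CSTS.RDY = 1, and finally IDENTIFY. Outside the harness, the same handshake against this listener can be exercised with nvme-cli, assuming the nvme-tcp module loaded by the modprobe earlier in this block:

    # run the same connect/property/identify exchange the trace shows and
    # print the discovery log page served at 10.0.0.2:4420
    nvme discover -t tcp -a 10.0.0.2 -s 4420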
[2024-07-15 09:31:33.248481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.182 [2024-07-15 09:31:33.248488] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.248495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba3c0) on tqpair=0x1f5a540 00:23:22.182 [2024-07-15 09:31:33.248503] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:22.182 [2024-07-15 09:31:33.248520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.248529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.182 [2024-07-15 09:31:33.248535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5a540) 00:23:22.183 [2024-07-15 09:31:33.248546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-07-15 09:31:33.248566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba3c0, cid 0, qid 0 00:23:22.183 [2024-07-15 09:31:33.248640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.183 [2024-07-15 09:31:33.248651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.183 [2024-07-15 09:31:33.248658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.248664] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba3c0) on tqpair=0x1f5a540 00:23:22.183 [2024-07-15 09:31:33.248672] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:22.183 [2024-07-15 09:31:33.248680] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:22.183 [2024-07-15 09:31:33.248693] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:22.183 [2024-07-15 09:31:33.248807] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:22.183 [2024-07-15 09:31:33.248818] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:22.183 [2024-07-15 09:31:33.248831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.248839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.248845] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5a540) 00:23:22.183 [2024-07-15 09:31:33.248856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-07-15 09:31:33.248878] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba3c0, cid 0, qid 0 00:23:22.183 [2024-07-15 09:31:33.248987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.183 [2024-07-15 09:31:33.248999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.183 [2024-07-15 09:31:33.249009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.249016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba3c0) on tqpair=0x1f5a540 00:23:22.183 [2024-07-15 09:31:33.249024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:22.183 [2024-07-15 09:31:33.249040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.249049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.249055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5a540) 00:23:22.183 [2024-07-15 09:31:33.249066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-07-15 09:31:33.249088] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba3c0, cid 0, qid 0 00:23:22.183 [2024-07-15 09:31:33.249163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.183 [2024-07-15 09:31:33.249176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.183 [2024-07-15 09:31:33.249183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.249190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba3c0) on tqpair=0x1f5a540 00:23:22.183 [2024-07-15 09:31:33.249198] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:22.183 [2024-07-15 09:31:33.249206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:22.183 [2024-07-15 09:31:33.249219] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:22.183 [2024-07-15 09:31:33.249237] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:22.183 [2024-07-15 09:31:33.249253] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.249261] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5a540) 00:23:22.183 [2024-07-15 09:31:33.249272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-07-15 09:31:33.249293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba3c0, cid 0, qid 0 00:23:22.183 [2024-07-15 09:31:33.249416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:22.183 [2024-07-15 09:31:33.249430] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:22.183 [2024-07-15 09:31:33.249437] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.249444] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f5a540): datao=0, datal=4096, cccid=0 00:23:22.183 [2024-07-15 09:31:33.249452] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fba3c0) on tqpair(0x1f5a540): expected_datao=0, payload_size=4096 00:23:22.183 [2024-07-15 09:31:33.249460] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.249477] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.249486] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.289927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.183 [2024-07-15 09:31:33.289946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.183 [2024-07-15 09:31:33.289954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.289961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba3c0) on tqpair=0x1f5a540 00:23:22.183 [2024-07-15 09:31:33.289973] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:22.183 [2024-07-15 09:31:33.289988] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:22.183 [2024-07-15 09:31:33.290000] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:22.183 [2024-07-15 09:31:33.290009] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:22.183 [2024-07-15 09:31:33.290017] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:22.183 [2024-07-15 09:31:33.290025] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:22.183 [2024-07-15 09:31:33.290040] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:22.183 [2024-07-15 09:31:33.290052] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290060] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5a540) 00:23:22.183 [2024-07-15 09:31:33.290078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:22.183 [2024-07-15 09:31:33.290102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba3c0, cid 0, qid 0 00:23:22.183 [2024-07-15 09:31:33.290232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.183 [2024-07-15 09:31:33.290244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.183 [2024-07-15 09:31:33.290250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba3c0) on tqpair=0x1f5a540 00:23:22.183 [2024-07-15 09:31:33.290268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5a540) 00:23:22.183 [2024-07-15 09:31:33.290292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.183 [2024-07-15 09:31:33.290302] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f5a540) 00:23:22.183 [2024-07-15 09:31:33.290324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.183 [2024-07-15 09:31:33.290333] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290340] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f5a540) 00:23:22.183 [2024-07-15 09:31:33.290355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.183 [2024-07-15 09:31:33.290364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.183 [2024-07-15 09:31:33.290386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.183 [2024-07-15 09:31:33.290395] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:22.183 [2024-07-15 09:31:33.290413] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:22.183 [2024-07-15 09:31:33.290429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f5a540) 00:23:22.183 [2024-07-15 09:31:33.290447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-07-15 09:31:33.290472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba3c0, cid 0, qid 0 00:23:22.183 [2024-07-15 09:31:33.290483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba540, cid 1, qid 0 00:23:22.183 [2024-07-15 09:31:33.290491] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba6c0, cid 2, qid 0 00:23:22.183 [2024-07-15 09:31:33.290499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.183 [2024-07-15 09:31:33.290506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba9c0, cid 4, qid 0 00:23:22.183 [2024-07-15 09:31:33.290647] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.183 [2024-07-15 09:31:33.290661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.183 [2024-07-15 09:31:33.290667] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba9c0) on tqpair=0x1f5a540 00:23:22.183 [2024-07-15 09:31:33.290683] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:22.183 [2024-07-15 09:31:33.290692] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:22.183 [2024-07-15 09:31:33.290710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.183 [2024-07-15 09:31:33.290719] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f5a540) 00:23:22.183 [2024-07-15 09:31:33.290730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-07-15 09:31:33.290751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba9c0, cid 4, qid 0 00:23:22.183 [2024-07-15 09:31:33.290882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:22.183 [2024-07-15 09:31:33.290896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:22.183 [2024-07-15 09:31:33.290903] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.290909] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f5a540): datao=0, datal=4096, cccid=4 00:23:22.184 [2024-07-15 09:31:33.290917] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fba9c0) on tqpair(0x1f5a540): expected_datao=0, payload_size=4096 00:23:22.184 [2024-07-15 09:31:33.290924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.290935] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.290942] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.290954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.184 [2024-07-15 09:31:33.290962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.184 [2024-07-15 09:31:33.290969] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.290976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba9c0) on tqpair=0x1f5a540 00:23:22.184 [2024-07-15 09:31:33.290993] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:22.184 [2024-07-15 09:31:33.291031] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.291042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f5a540) 00:23:22.184 [2024-07-15 09:31:33.291053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-07-15 09:31:33.291068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.291076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.291082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f5a540) 00:23:22.184 [2024-07-15 09:31:33.291091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.184 [2024-07-15 09:31:33.291118] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1fba9c0, cid 4, qid 0 00:23:22.184 [2024-07-15 09:31:33.291130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbab40, cid 5, qid 0 00:23:22.184 [2024-07-15 09:31:33.291280] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:22.184 [2024-07-15 09:31:33.291291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:22.184 [2024-07-15 09:31:33.291298] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.291304] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f5a540): datao=0, datal=1024, cccid=4 00:23:22.184 [2024-07-15 09:31:33.291312] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fba9c0) on tqpair(0x1f5a540): expected_datao=0, payload_size=1024 00:23:22.184 [2024-07-15 09:31:33.291319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.291329] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.291336] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.291344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.184 [2024-07-15 09:31:33.291353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.184 [2024-07-15 09:31:33.291359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.291366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fbab40) on tqpair=0x1f5a540 00:23:22.184 [2024-07-15 09:31:33.335829] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.184 [2024-07-15 09:31:33.335847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.184 [2024-07-15 09:31:33.335854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.335861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba9c0) on tqpair=0x1f5a540 00:23:22.184 [2024-07-15 09:31:33.335879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.335889] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f5a540) 00:23:22.184 [2024-07-15 09:31:33.335900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-07-15 09:31:33.335945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba9c0, cid 4, qid 0 00:23:22.184 [2024-07-15 09:31:33.336082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:22.184 [2024-07-15 09:31:33.336096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:22.184 [2024-07-15 09:31:33.336103] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.336110] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f5a540): datao=0, datal=3072, cccid=4 00:23:22.184 [2024-07-15 09:31:33.336117] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fba9c0) on tqpair(0x1f5a540): expected_datao=0, payload_size=3072 00:23:22.184 [2024-07-15 09:31:33.336125] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.336135] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:22.184 [2024-07-15 09:31:33.336142] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:22.184 [2024-07-15 09:31:33.336154] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:22.184 [2024-07-15 09:31:33.336163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:22.184 [2024-07-15 09:31:33.336169] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:22.184 [2024-07-15 09:31:33.336180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba9c0) on tqpair=0x1f5a540
00:23:22.184 [2024-07-15 09:31:33.336196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:22.184 [2024-07-15 09:31:33.336205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f5a540)
00:23:22.184 [2024-07-15 09:31:33.336215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.184 [2024-07-15 09:31:33.336244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba9c0, cid 4, qid 0
00:23:22.184 [2024-07-15 09:31:33.336340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:22.184 [2024-07-15 09:31:33.336354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:22.184 [2024-07-15 09:31:33.336360] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:22.184 [2024-07-15 09:31:33.336367] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f5a540): datao=0, datal=8, cccid=4
00:23:22.184 [2024-07-15 09:31:33.336374] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fba9c0) on tqpair(0x1f5a540): expected_datao=0, payload_size=8
00:23:22.184 [2024-07-15 09:31:33.336382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:22.184 [2024-07-15 09:31:33.336391] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:22.184 [2024-07-15 09:31:33.336398] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:22.445 [2024-07-15 09:31:33.376898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:22.445 [2024-07-15 09:31:33.376920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:22.445 [2024-07-15 09:31:33.376928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:22.445 [2024-07-15 09:31:33.376935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba9c0) on tqpair=0x1f5a540
00:23:22.445 =====================================================
00:23:22.445 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:22.445 =====================================================
00:23:22.445 Controller Capabilities/Features
00:23:22.445 ================================
00:23:22.445 Vendor ID: 0000
00:23:22.446 Subsystem Vendor ID: 0000
00:23:22.446 Serial Number: ....................
00:23:22.446 Model Number: ........................................
00:23:22.446 Firmware Version: 24.09
00:23:22.446 Recommended Arb Burst: 0
00:23:22.446 IEEE OUI Identifier: 00 00 00
00:23:22.446 Multi-path I/O
00:23:22.446 May have multiple subsystem ports: No
00:23:22.446 May have multiple controllers: No
00:23:22.446 Associated with SR-IOV VF: No
00:23:22.446 Max Data Transfer Size: 131072
00:23:22.446 Max Number of Namespaces: 0
00:23:22.446 Max Number of I/O Queues: 1024
00:23:22.446 NVMe Specification Version (VS): 1.3
00:23:22.446 NVMe Specification Version (Identify): 1.3
00:23:22.446 Maximum Queue Entries: 128
00:23:22.446 Contiguous Queues Required: Yes
00:23:22.446 Arbitration Mechanisms Supported
00:23:22.446 Weighted Round Robin: Not Supported
00:23:22.446 Vendor Specific: Not Supported
00:23:22.446 Reset Timeout: 15000 ms
00:23:22.446 Doorbell Stride: 4 bytes
00:23:22.446 NVM Subsystem Reset: Not Supported
00:23:22.446 Command Sets Supported
00:23:22.446 NVM Command Set: Supported
00:23:22.446 Boot Partition: Not Supported
00:23:22.446 Memory Page Size Minimum: 4096 bytes
00:23:22.446 Memory Page Size Maximum: 4096 bytes
00:23:22.446 Persistent Memory Region: Not Supported
00:23:22.446 Optional Asynchronous Events Supported
00:23:22.446 Namespace Attribute Notices: Not Supported
00:23:22.446 Firmware Activation Notices: Not Supported
00:23:22.446 ANA Change Notices: Not Supported
00:23:22.446 PLE Aggregate Log Change Notices: Not Supported
00:23:22.446 LBA Status Info Alert Notices: Not Supported
00:23:22.446 EGE Aggregate Log Change Notices: Not Supported
00:23:22.446 Normal NVM Subsystem Shutdown event: Not Supported
00:23:22.446 Zone Descriptor Change Notices: Not Supported
00:23:22.446 Discovery Log Change Notices: Supported
00:23:22.446 Controller Attributes
00:23:22.446 128-bit Host Identifier: Not Supported
00:23:22.446 Non-Operational Permissive Mode: Not Supported
00:23:22.446 NVM Sets: Not Supported
00:23:22.446 Read Recovery Levels: Not Supported
00:23:22.446 Endurance Groups: Not Supported
00:23:22.446 Predictable Latency Mode: Not Supported
00:23:22.446 Traffic Based Keep Alive: Not Supported
00:23:22.446 Namespace Granularity: Not Supported
00:23:22.446 SQ Associations: Not Supported
00:23:22.446 UUID List: Not Supported
00:23:22.446 Multi-Domain Subsystem: Not Supported
00:23:22.446 Fixed Capacity Management: Not Supported
00:23:22.446 Variable Capacity Management: Not Supported
00:23:22.446 Delete Endurance Group: Not Supported
00:23:22.446 Delete NVM Set: Not Supported
00:23:22.446 Extended LBA Formats Supported: Not Supported
00:23:22.446 Flexible Data Placement Supported: Not Supported
00:23:22.446
00:23:22.446 Controller Memory Buffer Support
00:23:22.446 ================================
00:23:22.446 Supported: No
00:23:22.446
00:23:22.446 Persistent Memory Region Support
00:23:22.446 ================================
00:23:22.446 Supported: No
00:23:22.446
00:23:22.446 Admin Command Set Attributes
00:23:22.446 ============================
00:23:22.446 Security Send/Receive: Not Supported
00:23:22.446 Format NVM: Not Supported
00:23:22.446 Firmware Activate/Download: Not Supported
00:23:22.446 Namespace Management: Not Supported
00:23:22.446 Device Self-Test: Not Supported
00:23:22.446 Directives: Not Supported
00:23:22.446 NVMe-MI: Not Supported
00:23:22.446 Virtualization Management: Not Supported
00:23:22.446 Doorbell Buffer Config: Not Supported
00:23:22.446 Get LBA Status Capability: Not Supported
00:23:22.446 Command & Feature Lockdown Capability: Not Supported
00:23:22.446 Abort Command Limit: 1
00:23:22.446 Async Event Request Limit: 4
00:23:22.446 Number of Firmware Slots: N/A
00:23:22.446 Firmware Slot 1 Read-Only: N/A
00:23:22.446 Firmware Activation Without Reset: N/A
00:23:22.446 Multiple Update Detection Support: N/A
00:23:22.446 Firmware Update Granularity: No Information Provided
00:23:22.446 Per-Namespace SMART Log: No
00:23:22.446 Asymmetric Namespace Access Log Page: Not Supported
00:23:22.446 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:22.446 Command Effects Log Page: Not Supported
00:23:22.446 Get Log Page Extended Data: Supported
00:23:22.446 Telemetry Log Pages: Not Supported
00:23:22.446 Persistent Event Log Pages: Not Supported
00:23:22.446 Supported Log Pages Log Page: May Support
00:23:22.446 Commands Supported & Effects Log Page: Not Supported
00:23:22.446 Feature Identifiers & Effects Log Page: May Support
00:23:22.446 NVMe-MI Commands & Effects Log Page: May Support
00:23:22.446 Data Area 4 for Telemetry Log: Not Supported
00:23:22.446 Error Log Page Entries Supported: 128
00:23:22.446 Keep Alive: Not Supported
00:23:22.446
00:23:22.446 NVM Command Set Attributes
00:23:22.446 ==========================
00:23:22.446 Submission Queue Entry Size
00:23:22.446 Max: 1
00:23:22.446 Min: 1
00:23:22.446 Completion Queue Entry Size
00:23:22.446 Max: 1
00:23:22.446 Min: 1
00:23:22.446 Number of Namespaces: 0
00:23:22.446 Compare Command: Not Supported
00:23:22.446 Write Uncorrectable Command: Not Supported
00:23:22.446 Dataset Management Command: Not Supported
00:23:22.446 Write Zeroes Command: Not Supported
00:23:22.446 Set Features Save Field: Not Supported
00:23:22.446 Reservations: Not Supported
00:23:22.446 Timestamp: Not Supported
00:23:22.446 Copy: Not Supported
00:23:22.446 Volatile Write Cache: Not Present
00:23:22.446 Atomic Write Unit (Normal): 1
00:23:22.446 Atomic Write Unit (PFail): 1
00:23:22.446 Atomic Compare & Write Unit: 1
00:23:22.446 Fused Compare & Write: Supported
00:23:22.446 Scatter-Gather List
00:23:22.446 SGL Command Set: Supported
00:23:22.446 SGL Keyed: Supported
00:23:22.446 SGL Bit Bucket Descriptor: Not Supported
00:23:22.446 SGL Metadata Pointer: Not Supported
00:23:22.446 Oversized SGL: Not Supported
00:23:22.446 SGL Metadata Address: Not Supported
00:23:22.446 SGL Offset: Supported
00:23:22.446 Transport SGL Data Block: Not Supported
00:23:22.446 Replay Protected Memory Block: Not Supported
00:23:22.446
00:23:22.446 Firmware Slot Information
00:23:22.446 =========================
00:23:22.446 Active slot: 0
00:23:22.446
00:23:22.446
00:23:22.446 Error Log
00:23:22.446 =========
00:23:22.446
00:23:22.446 Active Namespaces
00:23:22.446 =================
00:23:22.446 Discovery Log Page
00:23:22.446 ==================
00:23:22.446 Generation Counter: 2
00:23:22.446 Number of Records: 2
00:23:22.446 Record Format: 0
00:23:22.446
00:23:22.446 Discovery Log Entry 0
00:23:22.446 ----------------------
00:23:22.446 Transport Type: 3 (TCP)
00:23:22.446 Address Family: 1 (IPv4)
00:23:22.446 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:22.446 Entry Flags:
00:23:22.446 Duplicate Returned Information: 1
00:23:22.446 Explicit Persistent Connection Support for Discovery: 1
00:23:22.446 Transport Requirements:
00:23:22.446 Secure Channel: Not Required
00:23:22.446 Port ID: 0 (0x0000)
00:23:22.446 Controller ID: 65535 (0xffff)
00:23:22.446 Admin Max SQ Size: 128
00:23:22.446 Transport Service Identifier: 4420
00:23:22.446 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:22.446 Transport Address: 10.0.0.2
00:23:22.446 Discovery Log Entry 1
00:23:22.446 ----------------------
00:23:22.446 Transport Type: 3 (TCP)
00:23:22.446 Address Family: 1 (IPv4)
00:23:22.446 Subsystem Type: 2 (NVM Subsystem)
00:23:22.446 Entry Flags:
00:23:22.446 Duplicate Returned Information: 0
00:23:22.446 Explicit Persistent Connection Support for Discovery: 0
00:23:22.446 Transport Requirements:
00:23:22.446 Secure Channel: Not Required
00:23:22.446 Port ID: 0 (0x0000)
00:23:22.446 Controller ID: 65535 (0xffff)
00:23:22.446 Admin Max SQ Size: 128
00:23:22.446 Transport Service Identifier: 4420
00:23:22.446 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:22.446 Transport Address: 10.0.0.2 [2024-07-15 09:31:33.377057] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:23:22.446 [2024-07-15 09:31:33.377079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba3c0) on tqpair=0x1f5a540
00:23:22.446 [2024-07-15 09:31:33.377090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.446 [2024-07-15 09:31:33.377100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba540) on tqpair=0x1f5a540
00:23:22.446 [2024-07-15 09:31:33.377108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.446 [2024-07-15 09:31:33.377116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba6c0) on tqpair=0x1f5a540
00:23:22.446 [2024-07-15 09:31:33.377123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.446 [2024-07-15 09:31:33.377131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540
00:23:22.446 [2024-07-15 09:31:33.377139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.446 [2024-07-15 09:31:33.377156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:22.446 [2024-07-15 09:31:33.377166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:22.446 [2024-07-15 09:31:33.377187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540)
00:23:22.446 [2024-07-15 09:31:33.377198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.446 [2024-07-15 09:31:33.377223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0
00:23:22.446 [2024-07-15 09:31:33.377357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:22.446 [2024-07-15 09:31:33.377372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:22.446 [2024-07-15 09:31:33.377379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:22.446 [2024-07-15 09:31:33.377389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540
00:23:22.446 [2024-07-15 09:31:33.377402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:22.446 [2024-07-15 09:31:33.377410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:22.446 [2024-07-15 09:31:33.377417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540)
00:23:22.446 [2024-07-15
09:31:33.377427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.446 [2024-07-15 09:31:33.377454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.446 [2024-07-15 09:31:33.377570] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.446 [2024-07-15 09:31:33.377581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.446 [2024-07-15 09:31:33.377588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.446 [2024-07-15 09:31:33.377594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.446 [2024-07-15 09:31:33.377602] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:22.446 [2024-07-15 09:31:33.377610] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:22.446 [2024-07-15 09:31:33.377626] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.446 [2024-07-15 09:31:33.377635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.446 [2024-07-15 09:31:33.377641] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.446 [2024-07-15 09:31:33.377652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.446 [2024-07-15 09:31:33.377673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.446 [2024-07-15 09:31:33.377754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.446 [2024-07-15 09:31:33.377767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.446 [2024-07-15 09:31:33.377774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.446 [2024-07-15 09:31:33.377781] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.446 [2024-07-15 09:31:33.377797] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.446 [2024-07-15 09:31:33.377814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.446 [2024-07-15 09:31:33.377821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.446 [2024-07-15 09:31:33.377831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.446 [2024-07-15 09:31:33.377853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.446 [2024-07-15 09:31:33.377931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.446 [2024-07-15 09:31:33.377945] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.446 [2024-07-15 09:31:33.377951] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.377958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.447 [2024-07-15 09:31:33.377974] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.377983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.377990] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.447 [2024-07-15 09:31:33.378000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.447 [2024-07-15 09:31:33.378020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.447 [2024-07-15 09:31:33.378089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.447 [2024-07-15 09:31:33.378105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.447 [2024-07-15 09:31:33.378112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.447 [2024-07-15 09:31:33.378135] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378151] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.447 [2024-07-15 09:31:33.378161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.447 [2024-07-15 09:31:33.378182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.447 [2024-07-15 09:31:33.378255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.447 [2024-07-15 09:31:33.378268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.447 [2024-07-15 09:31:33.378275] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378282] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.447 [2024-07-15 09:31:33.378298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.447 [2024-07-15 09:31:33.378324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.447 [2024-07-15 09:31:33.378344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.447 [2024-07-15 09:31:33.378423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.447 [2024-07-15 09:31:33.378434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.447 [2024-07-15 09:31:33.378441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.447 [2024-07-15 09:31:33.378463] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378472] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378478] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.447 [2024-07-15 09:31:33.378489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.447 [2024-07-15 09:31:33.378510] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.447 [2024-07-15 09:31:33.378585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.447 [2024-07-15 09:31:33.378598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.447 [2024-07-15 09:31:33.378605] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.447 [2024-07-15 09:31:33.378627] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.447 [2024-07-15 09:31:33.378653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.447 [2024-07-15 09:31:33.378673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.447 [2024-07-15 09:31:33.378745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.447 [2024-07-15 09:31:33.378757] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.447 [2024-07-15 09:31:33.378767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.447 [2024-07-15 09:31:33.378790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.447 [2024-07-15 09:31:33.378825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.447 [2024-07-15 09:31:33.378848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.447 [2024-07-15 09:31:33.378921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.447 [2024-07-15 09:31:33.378932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.447 [2024-07-15 09:31:33.378939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.447 [2024-07-15 09:31:33.378961] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.378977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.447 [2024-07-15 09:31:33.378987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.447 [2024-07-15 09:31:33.379009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.447 
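
The discovery phase traced earlier fetched log page 0x70 (the discovery log) in three GET LOG PAGE commands. GET LOG PAGE encodes the transfer length as a 0-based dword count in cdw10 bits 31:16 (NUMDL; NUMDH in cdw11 is zero here): cdw10 0x00ff0070 reads 256 dwords, the 1024-byte header; 0x02ff0070 reads 768 dwords, the header plus the two 1024-byte records; and 0x00010070 re-reads 2 dwords, the generation counter, to confirm the log did not change between the transfers. A minimal sketch of the header read using SPDK's public API, assuming an already-connected discovery controller and trimming error handling:

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static volatile bool g_log_done;

    /* Completion callback for the admin GET LOG PAGE command. */
    static void log_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        g_log_done = true;
    }

    /* Read the 1024-byte discovery log header (the first of the three
     * transfers above) and print the generation counter and record count. */
    static void read_discovery_header(struct spdk_nvme_ctrlr *ctrlr)
    {
        static struct spdk_nvmf_discovery_log_page header; /* sizeof == 1024 */

        g_log_done = false;
        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                             SPDK_NVME_GLOBAL_NS_TAG, &header,
                                             sizeof(header), 0, log_cb, NULL) != 0) {
            return;
        }
        while (!g_log_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n",
               header.genctr, header.numrec);
    }

Against this target the header would report numrec=2, matching "Number of Records: 2" and the two entries printed above.
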
[2024-07-15 09:31:33.379081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.447 [2024-07-15 09:31:33.379093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.447 [2024-07-15 09:31:33.379099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.379106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.447 [2024-07-15 09:31:33.379122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.379131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.379137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.447 [2024-07-15 09:31:33.379147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.447 [2024-07-15 09:31:33.379169] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.447 [2024-07-15 09:31:33.379246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.447 [2024-07-15 09:31:33.379259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.447 [2024-07-15 09:31:33.379266] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.379272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.447 [2024-07-15 09:31:33.379289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.379298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.379304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.447 [2024-07-15 09:31:33.379314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.447 [2024-07-15 09:31:33.379335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.447 [2024-07-15 09:31:33.379403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.447 [2024-07-15 09:31:33.379415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.447 [2024-07-15 09:31:33.379422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.379432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.447 [2024-07-15 09:31:33.379449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.379458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.379464] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.447 [2024-07-15 09:31:33.379475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.447 [2024-07-15 09:31:33.379495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.447 [2024-07-15 09:31:33.379567] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.447 [2024-07-15 09:31:33.379579] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
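
The long run of identical FABRIC PROPERTY GET commands on cid:3 here is the teardown path: after "Prepare to destruct", the driver requests a normal shutdown through CC.SHN and then polls the CSTS property until SHST reports completion, which the next lines record as "shutdown complete in 6 milliseconds", well inside the 10000 ms budget it logged. Roughly, with prop_get/prop_set as hypothetical stand-ins for the Fabrics Property Get/Set commands (not SPDK functions):

    #include <stdint.h>

    /* Hypothetical helpers standing in for Fabrics Property Get/Set; each
     * call corresponds to one FABRIC PROPERTY GET/SET round trip above. */
    extern uint32_t prop_get(uint32_t offset);
    extern void prop_set(uint32_t offset, uint32_t value);

    #define REG_CC   0x14u  /* controller configuration */
    #define REG_CSTS 0x1cu  /* controller status */

    /* Request a normal shutdown (CC.SHN = 01b, bits 15:14), then poll
     * CSTS.SHST (bits 3:2) until it reads 10b, "shutdown complete". */
    static void shutdown_controller(void)
    {
        prop_set(REG_CC, (prop_get(REG_CC) & ~(3u << 14)) | (1u << 14));
        while (((prop_get(REG_CSTS) >> 2) & 3u) != 2u) {
            /* each pass issues another PROPERTY GET, as seen in the trace */
        }
    }
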
00:23:22.447 [2024-07-15 09:31:33.379585] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.379592] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.447 [2024-07-15 09:31:33.379607] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.379616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.379623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.447 [2024-07-15 09:31:33.379633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.447 [2024-07-15 09:31:33.379653] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.447 [2024-07-15 09:31:33.379723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.447 [2024-07-15 09:31:33.379735] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.447 [2024-07-15 09:31:33.379741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.447 [2024-07-15 09:31:33.379748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.448 [2024-07-15 09:31:33.379763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.379772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.379779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5a540) 00:23:22.448 [2024-07-15 09:31:33.379789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.448 [2024-07-15 09:31:33.383820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fba840, cid 3, qid 0 00:23:22.448 [2024-07-15 09:31:33.383969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.448 [2024-07-15 09:31:33.383982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.448 [2024-07-15 09:31:33.383989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.383996] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fba840) on tqpair=0x1f5a540 00:23:22.448 [2024-07-15 09:31:33.384009] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:22.448 00:23:22.448 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:22.448 [2024-07-15 09:31:33.419472] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:23:22.448 [2024-07-15 09:31:33.419517] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892317 ] 00:23:22.448 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.448 [2024-07-15 09:31:33.455047] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:22.448 [2024-07-15 09:31:33.455113] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:22.448 [2024-07-15 09:31:33.455123] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:22.448 [2024-07-15 09:31:33.455137] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:22.448 [2024-07-15 09:31:33.455146] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:22.448 [2024-07-15 09:31:33.458851] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:22.448 [2024-07-15 09:31:33.458895] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xeef540 0 00:23:22.448 [2024-07-15 09:31:33.465813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:22.448 [2024-07-15 09:31:33.465834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:22.448 [2024-07-15 09:31:33.465842] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:22.448 [2024-07-15 09:31:33.465848] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:22.448 [2024-07-15 09:31:33.465889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.465901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.465908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeef540) 00:23:22.448 [2024-07-15 09:31:33.465922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:22.448 [2024-07-15 09:31:33.465956] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f3c0, cid 0, qid 0 00:23:22.448 [2024-07-15 09:31:33.472813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.448 [2024-07-15 09:31:33.472831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.448 [2024-07-15 09:31:33.472838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.472845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f3c0) on tqpair=0xeef540 00:23:22.448 [2024-07-15 09:31:33.472859] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:22.448 [2024-07-15 09:31:33.472871] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:22.448 [2024-07-15 09:31:33.472881] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:22.448 [2024-07-15 09:31:33.472899] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.472907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.448 
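
The lines above are the synchronous connect path for nqn.2016-06.io.spdk:cnode1: socket setup, the ICReq/ICResp exchange, FABRIC CONNECT returning CNTLID 0x0001, then the version/capabilities property reads. A minimal sketch of the same entry point with SPDK's public API, assuming the environment has already been initialized (spdk_env_init) and omitting error reporting:

    #include <string.h>
    #include "spdk/nvme.h"

    /* Parse the same -r target string used above and connect synchronously;
     * spdk_nvme_connect() drives the icreq/FABRIC CONNECT exchange and the
     * controller state machine this trace walks through. */
    static struct spdk_nvme_ctrlr *connect_cnode1(void)
    {
        struct spdk_nvme_transport_id trid;

        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return NULL;
        }
        return spdk_nvme_connect(&trid, NULL, 0); /* NULL opts: library defaults */
    }
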
[2024-07-15 09:31:33.472914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeef540) 00:23:22.448 [2024-07-15 09:31:33.472925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.448 [2024-07-15 09:31:33.472957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f3c0, cid 0, qid 0 00:23:22.448 [2024-07-15 09:31:33.473086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.448 [2024-07-15 09:31:33.473098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.448 [2024-07-15 09:31:33.473105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.473112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f3c0) on tqpair=0xeef540 00:23:22.448 [2024-07-15 09:31:33.473120] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:22.448 [2024-07-15 09:31:33.473133] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:22.448 [2024-07-15 09:31:33.473149] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.473158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.473164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeef540) 00:23:22.448 [2024-07-15 09:31:33.473175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.448 [2024-07-15 09:31:33.473196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f3c0, cid 0, qid 0 00:23:22.448 [2024-07-15 09:31:33.473275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.448 [2024-07-15 09:31:33.473288] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.448 [2024-07-15 09:31:33.473295] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.473302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f3c0) on tqpair=0xeef540 00:23:22.448 [2024-07-15 09:31:33.473310] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:22.448 [2024-07-15 09:31:33.473323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:22.448 [2024-07-15 09:31:33.473335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.473343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.473349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeef540) 00:23:22.448 [2024-07-15 09:31:33.473360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.448 [2024-07-15 09:31:33.473380] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f3c0, cid 0, qid 0 00:23:22.448 [2024-07-15 09:31:33.473475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.448 [2024-07-15 09:31:33.473488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.448 
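
The "read vs" and "read cap" states above fetch the Version and Capabilities properties over the admin queue; after initialization they are cached on the controller handle. A short sketch, assuming a connected controller, that prints the values this target reported in the identify output earlier (VS 1.3, "Maximum Queue Entries: 128"):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* VS and CAP were fetched during the init sequence traced above; these
     * getters return the cached copies without another property read. */
    static void print_version_and_cap(struct spdk_nvme_ctrlr *ctrlr)
    {
        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
        union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

        /* MQES is 0-based, so 127 prints as "max queue entries 128" */
        printf("NVMe %u.%u, max queue entries %u\n",
               (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr,
               (unsigned)cap.bits.mqes + 1);
    }
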
[2024-07-15 09:31:33.473494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.473501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f3c0) on tqpair=0xeef540 00:23:22.448 [2024-07-15 09:31:33.473509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:22.448 [2024-07-15 09:31:33.473526] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.473535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.473541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeef540) 00:23:22.448 [2024-07-15 09:31:33.473551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.448 [2024-07-15 09:31:33.473572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f3c0, cid 0, qid 0 00:23:22.448 [2024-07-15 09:31:33.473650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.448 [2024-07-15 09:31:33.473663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.448 [2024-07-15 09:31:33.473669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.473676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f3c0) on tqpair=0xeef540 00:23:22.448 [2024-07-15 09:31:33.473683] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:22.448 [2024-07-15 09:31:33.473691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:22.448 [2024-07-15 09:31:33.473704] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:22.448 [2024-07-15 09:31:33.473814] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:22.448 [2024-07-15 09:31:33.473827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:22.448 [2024-07-15 09:31:33.473839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.473863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.473869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeef540) 00:23:22.448 [2024-07-15 09:31:33.473880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.448 [2024-07-15 09:31:33.473901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f3c0, cid 0, qid 0 00:23:22.448 [2024-07-15 09:31:33.474044] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.448 [2024-07-15 09:31:33.474058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.448 [2024-07-15 09:31:33.474065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.474072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f3c0) on tqpair=0xeef540 00:23:22.448 [2024-07-15 
09:31:33.474080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:22.448 [2024-07-15 09:31:33.474096] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.474106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.474112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeef540) 00:23:22.448 [2024-07-15 09:31:33.474122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.448 [2024-07-15 09:31:33.474143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f3c0, cid 0, qid 0 00:23:22.448 [2024-07-15 09:31:33.474245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.448 [2024-07-15 09:31:33.474258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.448 [2024-07-15 09:31:33.474264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.448 [2024-07-15 09:31:33.474271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f3c0) on tqpair=0xeef540 00:23:22.448 [2024-07-15 09:31:33.474278] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:22.448 [2024-07-15 09:31:33.474287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:22.449 [2024-07-15 09:31:33.474300] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:22.449 [2024-07-15 09:31:33.474314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:22.449 [2024-07-15 09:31:33.474327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.474335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeef540) 00:23:22.449 [2024-07-15 09:31:33.474346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.449 [2024-07-15 09:31:33.474366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f3c0, cid 0, qid 0 00:23:22.449 [2024-07-15 09:31:33.474481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:22.449 [2024-07-15 09:31:33.474495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:22.449 [2024-07-15 09:31:33.474502] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.474509] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeef540): datao=0, datal=4096, cccid=0 00:23:22.449 [2024-07-15 09:31:33.474516] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4f3c0) on tqpair(0xeef540): expected_datao=0, payload_size=4096 00:23:22.449 [2024-07-15 09:31:33.474527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.474545] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.474554] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:22.449 
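
The enable handshake above is the ordinary NVMe controller bring-up carried over Fabrics properties: CC.EN and CSTS.RDY both read 0, the driver writes CC.EN = 1, then polls until CSTS.RDY = 1 and logs "controller is ready". Reusing the hypothetical prop_get/prop_set stand-ins from the shutdown sketch earlier:

    /* CC.EN is bit 0 of CC; CSTS.RDY is bit 0 of CSTS. Mirrors the
     * "check en" -> "enable controller" -> "wait for CSTS.RDY = 1" states. */
    static void enable_controller(void)
    {
        if ((prop_get(REG_CC) & 1u) == 0 && (prop_get(REG_CSTS) & 1u) == 0) {
            prop_set(REG_CC, prop_get(REG_CC) | 1u);  /* CC.EN = 1 */
            while ((prop_get(REG_CSTS) & 1u) == 0) {
                /* bounded by the 15000 ms timeout shown in the trace */
            }
        }
    }
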
[2024-07-15 09:31:33.518812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.449 [2024-07-15 09:31:33.518830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.449 [2024-07-15 09:31:33.518837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.518843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f3c0) on tqpair=0xeef540 00:23:22.449 [2024-07-15 09:31:33.518854] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:22.449 [2024-07-15 09:31:33.518867] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:22.449 [2024-07-15 09:31:33.518875] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:22.449 [2024-07-15 09:31:33.518882] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:22.449 [2024-07-15 09:31:33.518889] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:22.449 [2024-07-15 09:31:33.518897] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:22.449 [2024-07-15 09:31:33.518912] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:22.449 [2024-07-15 09:31:33.518924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.518931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.518937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeef540) 00:23:22.449 [2024-07-15 09:31:33.518948] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:22.449 [2024-07-15 09:31:33.518970] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f3c0, cid 0, qid 0 00:23:22.449 [2024-07-15 09:31:33.519115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.449 [2024-07-15 09:31:33.519129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.449 [2024-07-15 09:31:33.519136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f3c0) on tqpair=0xeef540 00:23:22.449 [2024-07-15 09:31:33.519153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeef540) 00:23:22.449 [2024-07-15 09:31:33.519177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.449 [2024-07-15 09:31:33.519187] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xeef540) 
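
The "configure AER" step above sets the async event configuration feature (SET FEATURES cdw10:0000000b) and then queues ASYNC EVENT REQUEST commands, one per slot of the "Async Event Request Limit: 4" advertised in the identify data. An application receives the completions through a registered callback; a minimal sketch:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Called when one of the outstanding ASYNC EVENT REQUESTs completes;
     * cdw0 carries the event type/info fields defined by the NVMe spec. */
    static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        printf("async event: cdw0=0x%08x\n", (unsigned)cpl->cdw0);
    }

    static void watch_events(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        for (;;) {
            /* polling the admin queue delivers AER completions to aer_cb */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }
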
00:23:22.449 [2024-07-15 09:31:33.519209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.449 [2024-07-15 09:31:33.519218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xeef540) 00:23:22.449 [2024-07-15 09:31:33.519240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.449 [2024-07-15 09:31:33.519254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.449 [2024-07-15 09:31:33.519276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.449 [2024-07-15 09:31:33.519299] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:22.449 [2024-07-15 09:31:33.519317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:22.449 [2024-07-15 09:31:33.519329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeef540) 00:23:22.449 [2024-07-15 09:31:33.519346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.449 [2024-07-15 09:31:33.519367] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f3c0, cid 0, qid 0 00:23:22.449 [2024-07-15 09:31:33.519393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f540, cid 1, qid 0 00:23:22.449 [2024-07-15 09:31:33.519401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f6c0, cid 2, qid 0 00:23:22.449 [2024-07-15 09:31:33.519408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.449 [2024-07-15 09:31:33.519415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f9c0, cid 4, qid 0 00:23:22.449 [2024-07-15 09:31:33.519610] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.449 [2024-07-15 09:31:33.519624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.449 [2024-07-15 09:31:33.519631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f9c0) on tqpair=0xeef540 00:23:22.449 [2024-07-15 09:31:33.519645] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:22.449 [2024-07-15 09:31:33.519654] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:22.449 [2024-07-15 09:31:33.519667] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:22.449 [2024-07-15 09:31:33.519678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:22.449 [2024-07-15 09:31:33.519689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519703] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeef540) 00:23:22.449 [2024-07-15 09:31:33.519713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:22.449 [2024-07-15 09:31:33.519734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f9c0, cid 4, qid 0 00:23:22.449 [2024-07-15 09:31:33.519931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.449 [2024-07-15 09:31:33.519947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.449 [2024-07-15 09:31:33.519953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.519960] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f9c0) on tqpair=0xeef540 00:23:22.449 [2024-07-15 09:31:33.520024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:22.449 [2024-07-15 09:31:33.520047] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:22.449 [2024-07-15 09:31:33.520062] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.520070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeef540) 00:23:22.449 [2024-07-15 09:31:33.520081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.449 [2024-07-15 09:31:33.520102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f9c0, cid 4, qid 0 00:23:22.449 [2024-07-15 09:31:33.520315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:22.449 [2024-07-15 09:31:33.520330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:22.449 [2024-07-15 09:31:33.520337] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.520343] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeef540): datao=0, datal=4096, cccid=4 00:23:22.449 [2024-07-15 09:31:33.520351] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4f9c0) on tqpair(0xeef540): expected_datao=0, payload_size=4096 00:23:22.449 [2024-07-15 09:31:33.520358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.520368] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.520376] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:22.449 [2024-07-15 09:31:33.520398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.449 [2024-07-15 09:31:33.520409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:23:22.450 [2024-07-15 09:31:33.520415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.520422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f9c0) on tqpair=0xeef540 00:23:22.450 [2024-07-15 09:31:33.520437] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:22.450 [2024-07-15 09:31:33.520458] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:22.450 [2024-07-15 09:31:33.520476] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:22.450 [2024-07-15 09:31:33.520490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.520497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeef540) 00:23:22.450 [2024-07-15 09:31:33.520508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.450 [2024-07-15 09:31:33.520529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f9c0, cid 4, qid 0 00:23:22.450 [2024-07-15 09:31:33.520640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:22.450 [2024-07-15 09:31:33.520654] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:22.450 [2024-07-15 09:31:33.520660] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.520667] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeef540): datao=0, datal=4096, cccid=4 00:23:22.450 [2024-07-15 09:31:33.520674] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4f9c0) on tqpair(0xeef540): expected_datao=0, payload_size=4096 00:23:22.450 [2024-07-15 09:31:33.520682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.520692] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.520699] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.520720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.450 [2024-07-15 09:31:33.520731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.450 [2024-07-15 09:31:33.520737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.520747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f9c0) on tqpair=0xeef540 00:23:22.450 [2024-07-15 09:31:33.520767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:22.450 [2024-07-15 09:31:33.520785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:22.450 [2024-07-15 09:31:33.520807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.520817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeef540) 00:23:22.450 [2024-07-15 09:31:33.520827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.450 [2024-07-15 09:31:33.520849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f9c0, cid 4, qid 0 00:23:22.450 [2024-07-15 09:31:33.520941] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:22.450 [2024-07-15 09:31:33.520955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:22.450 [2024-07-15 09:31:33.520962] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.520968] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeef540): datao=0, datal=4096, cccid=4 00:23:22.450 [2024-07-15 09:31:33.520976] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4f9c0) on tqpair(0xeef540): expected_datao=0, payload_size=4096 00:23:22.450 [2024-07-15 09:31:33.520983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.520993] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.521000] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.521025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.450 [2024-07-15 09:31:33.521038] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.450 [2024-07-15 09:31:33.521044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.521051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f9c0) on tqpair=0xeef540 00:23:22.450 [2024-07-15 09:31:33.521064] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:22.450 [2024-07-15 09:31:33.521078] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:22.450 [2024-07-15 09:31:33.521093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:22.450 [2024-07-15 09:31:33.521104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:22.450 [2024-07-15 09:31:33.521112] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:22.450 [2024-07-15 09:31:33.521120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:22.450 [2024-07-15 09:31:33.521129] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:22.450 [2024-07-15 09:31:33.521137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:22.450 [2024-07-15 09:31:33.521145] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:22.450 [2024-07-15 09:31:33.521164] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.521172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeef540) 00:23:22.450 [2024-07-15 09:31:33.521183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.450 [2024-07-15 09:31:33.521200] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.521208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.521230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeef540) 00:23:22.450 [2024-07-15 09:31:33.521239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.450 [2024-07-15 09:31:33.521263] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f9c0, cid 4, qid 0 00:23:22.450 [2024-07-15 09:31:33.521274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4fb40, cid 5, qid 0 00:23:22.450 [2024-07-15 09:31:33.521415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.450 [2024-07-15 09:31:33.521429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.450 [2024-07-15 09:31:33.521436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.521443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f9c0) on tqpair=0xeef540 00:23:22.450 [2024-07-15 09:31:33.521453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.450 [2024-07-15 09:31:33.521461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.450 [2024-07-15 09:31:33.521468] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.521474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4fb40) on tqpair=0xeef540 00:23:22.450 [2024-07-15 09:31:33.521489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.521498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeef540) 00:23:22.450 [2024-07-15 09:31:33.521508] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.450 [2024-07-15 09:31:33.521529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4fb40, cid 5, qid 0 00:23:22.450 [2024-07-15 09:31:33.521618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.450 [2024-07-15 09:31:33.521632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.450 [2024-07-15 09:31:33.521639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.521646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4fb40) on tqpair=0xeef540 00:23:22.450 [2024-07-15 09:31:33.521661] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.521670] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeef540) 00:23:22.450 [2024-07-15 09:31:33.521680] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.450 [2024-07-15 09:31:33.521700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4fb40, cid 5, qid 0 00:23:22.450 [2024-07-15 09:31:33.521776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.450 [2024-07-15 09:31:33.521789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:22.450 [2024-07-15 09:31:33.521796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.521811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4fb40) on tqpair=0xeef540 00:23:22.450 [2024-07-15 09:31:33.521828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.450 [2024-07-15 09:31:33.521847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeef540) 00:23:22.450 [2024-07-15 09:31:33.521858] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.450 [2024-07-15 09:31:33.521879] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4fb40, cid 5, qid 0 00:23:22.450 [2024-07-15 09:31:33.521968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.450 [2024-07-15 09:31:33.521982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.451 [2024-07-15 09:31:33.521992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4fb40) on tqpair=0xeef540 00:23:22.451 [2024-07-15 09:31:33.522024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeef540) 00:23:22.451 [2024-07-15 09:31:33.522046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.451 [2024-07-15 09:31:33.522058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeef540) 00:23:22.451 [2024-07-15 09:31:33.522075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.451 [2024-07-15 09:31:33.522086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522094] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xeef540) 00:23:22.451 [2024-07-15 09:31:33.522103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.451 [2024-07-15 09:31:33.522114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xeef540) 00:23:22.451 [2024-07-15 09:31:33.522131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.451 [2024-07-15 09:31:33.522168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4fb40, cid 5, qid 0 00:23:22.451 [2024-07-15 09:31:33.522179] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f9c0, cid 4, qid 0 00:23:22.451 [2024-07-15 09:31:33.522186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4fcc0, cid 6, qid 0 00:23:22.451 [2024-07-15 
09:31:33.522194] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4fe40, cid 7, qid 0 00:23:22.451 [2024-07-15 09:31:33.522394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:22.451 [2024-07-15 09:31:33.522409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:22.451 [2024-07-15 09:31:33.522416] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522423] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeef540): datao=0, datal=8192, cccid=5 00:23:22.451 [2024-07-15 09:31:33.522435] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4fb40) on tqpair(0xeef540): expected_datao=0, payload_size=8192 00:23:22.451 [2024-07-15 09:31:33.522443] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522462] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522470] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:22.451 [2024-07-15 09:31:33.522493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:22.451 [2024-07-15 09:31:33.522500] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522507] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeef540): datao=0, datal=512, cccid=4 00:23:22.451 [2024-07-15 09:31:33.522515] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4f9c0) on tqpair(0xeef540): expected_datao=0, payload_size=512 00:23:22.451 [2024-07-15 09:31:33.522522] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522531] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522539] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:22.451 [2024-07-15 09:31:33.522565] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:22.451 [2024-07-15 09:31:33.522572] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522578] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeef540): datao=0, datal=512, cccid=6 00:23:22.451 [2024-07-15 09:31:33.522586] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4fcc0) on tqpair(0xeef540): expected_datao=0, payload_size=512 00:23:22.451 [2024-07-15 09:31:33.522594] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522603] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522610] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:22.451 [2024-07-15 09:31:33.522628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:22.451 [2024-07-15 09:31:33.522634] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522640] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeef540): datao=0, datal=4096, cccid=7 00:23:22.451 [2024-07-15 09:31:33.522650] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4fe40) on tqpair(0xeef540): expected_datao=0, payload_size=4096 00:23:22.451 [2024-07-15 09:31:33.522658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522682] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522689] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.451 [2024-07-15 09:31:33.522709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.451 [2024-07-15 09:31:33.522716] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4fb40) on tqpair=0xeef540 00:23:22.451 [2024-07-15 09:31:33.522754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.451 [2024-07-15 09:31:33.522765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.451 [2024-07-15 09:31:33.522771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.522777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f9c0) on tqpair=0xeef540 00:23:22.451 [2024-07-15 09:31:33.526807] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.451 [2024-07-15 09:31:33.526825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.451 [2024-07-15 09:31:33.526832] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.526838] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4fcc0) on tqpair=0xeef540 00:23:22.451 [2024-07-15 09:31:33.526850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.451 [2024-07-15 09:31:33.526859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.451 [2024-07-15 09:31:33.526866] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.451 [2024-07-15 09:31:33.526873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4fe40) on tqpair=0xeef540 00:23:22.451 ===================================================== 00:23:22.451 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:22.451 ===================================================== 00:23:22.451 Controller Capabilities/Features 00:23:22.451 ================================ 00:23:22.451 Vendor ID: 8086 00:23:22.451 Subsystem Vendor ID: 8086 00:23:22.451 Serial Number: SPDK00000000000001 00:23:22.451 Model Number: SPDK bdev Controller 00:23:22.451 Firmware Version: 24.09 00:23:22.451 Recommended Arb Burst: 6 00:23:22.451 IEEE OUI Identifier: e4 d2 5c 00:23:22.451 Multi-path I/O 00:23:22.451 May have multiple subsystem ports: Yes 00:23:22.451 May have multiple controllers: Yes 00:23:22.451 Associated with SR-IOV VF: No 00:23:22.451 Max Data Transfer Size: 131072 00:23:22.451 Max Number of Namespaces: 32 00:23:22.451 Max Number of I/O Queues: 127 00:23:22.451 NVMe Specification Version (VS): 1.3 00:23:22.451 NVMe Specification Version (Identify): 1.3 00:23:22.451 Maximum Queue Entries: 128 00:23:22.451 Contiguous Queues Required: Yes 00:23:22.451 Arbitration Mechanisms Supported 00:23:22.451 Weighted Round Robin: Not Supported 00:23:22.451 Vendor Specific: Not Supported 00:23:22.451 Reset Timeout: 15000 ms 00:23:22.451 
Doorbell Stride: 4 bytes 00:23:22.451 NVM Subsystem Reset: Not Supported 00:23:22.451 Command Sets Supported 00:23:22.451 NVM Command Set: Supported 00:23:22.451 Boot Partition: Not Supported 00:23:22.451 Memory Page Size Minimum: 4096 bytes 00:23:22.451 Memory Page Size Maximum: 4096 bytes 00:23:22.451 Persistent Memory Region: Not Supported 00:23:22.451 Optional Asynchronous Events Supported 00:23:22.451 Namespace Attribute Notices: Supported 00:23:22.451 Firmware Activation Notices: Not Supported 00:23:22.451 ANA Change Notices: Not Supported 00:23:22.451 PLE Aggregate Log Change Notices: Not Supported 00:23:22.451 LBA Status Info Alert Notices: Not Supported 00:23:22.451 EGE Aggregate Log Change Notices: Not Supported 00:23:22.451 Normal NVM Subsystem Shutdown event: Not Supported 00:23:22.451 Zone Descriptor Change Notices: Not Supported 00:23:22.451 Discovery Log Change Notices: Not Supported 00:23:22.451 Controller Attributes 00:23:22.451 128-bit Host Identifier: Supported 00:23:22.451 Non-Operational Permissive Mode: Not Supported 00:23:22.451 NVM Sets: Not Supported 00:23:22.451 Read Recovery Levels: Not Supported 00:23:22.451 Endurance Groups: Not Supported 00:23:22.451 Predictable Latency Mode: Not Supported 00:23:22.451 Traffic Based Keep ALive: Not Supported 00:23:22.451 Namespace Granularity: Not Supported 00:23:22.452 SQ Associations: Not Supported 00:23:22.452 UUID List: Not Supported 00:23:22.452 Multi-Domain Subsystem: Not Supported 00:23:22.452 Fixed Capacity Management: Not Supported 00:23:22.452 Variable Capacity Management: Not Supported 00:23:22.452 Delete Endurance Group: Not Supported 00:23:22.452 Delete NVM Set: Not Supported 00:23:22.452 Extended LBA Formats Supported: Not Supported 00:23:22.452 Flexible Data Placement Supported: Not Supported 00:23:22.452 00:23:22.452 Controller Memory Buffer Support 00:23:22.452 ================================ 00:23:22.452 Supported: No 00:23:22.452 00:23:22.452 Persistent Memory Region Support 00:23:22.452 ================================ 00:23:22.452 Supported: No 00:23:22.452 00:23:22.452 Admin Command Set Attributes 00:23:22.452 ============================ 00:23:22.452 Security Send/Receive: Not Supported 00:23:22.452 Format NVM: Not Supported 00:23:22.452 Firmware Activate/Download: Not Supported 00:23:22.452 Namespace Management: Not Supported 00:23:22.452 Device Self-Test: Not Supported 00:23:22.452 Directives: Not Supported 00:23:22.452 NVMe-MI: Not Supported 00:23:22.452 Virtualization Management: Not Supported 00:23:22.452 Doorbell Buffer Config: Not Supported 00:23:22.452 Get LBA Status Capability: Not Supported 00:23:22.452 Command & Feature Lockdown Capability: Not Supported 00:23:22.452 Abort Command Limit: 4 00:23:22.452 Async Event Request Limit: 4 00:23:22.452 Number of Firmware Slots: N/A 00:23:22.452 Firmware Slot 1 Read-Only: N/A 00:23:22.452 Firmware Activation Without Reset: N/A 00:23:22.452 Multiple Update Detection Support: N/A 00:23:22.452 Firmware Update Granularity: No Information Provided 00:23:22.452 Per-Namespace SMART Log: No 00:23:22.452 Asymmetric Namespace Access Log Page: Not Supported 00:23:22.452 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:22.452 Command Effects Log Page: Supported 00:23:22.452 Get Log Page Extended Data: Supported 00:23:22.452 Telemetry Log Pages: Not Supported 00:23:22.452 Persistent Event Log Pages: Not Supported 00:23:22.452 Supported Log Pages Log Page: May Support 00:23:22.452 Commands Supported & Effects Log Page: Not Supported 00:23:22.452 Feature Identifiers & 
Effects Log Page:May Support 00:23:22.452 NVMe-MI Commands & Effects Log Page: May Support 00:23:22.452 Data Area 4 for Telemetry Log: Not Supported 00:23:22.452 Error Log Page Entries Supported: 128 00:23:22.452 Keep Alive: Supported 00:23:22.452 Keep Alive Granularity: 10000 ms 00:23:22.452 00:23:22.452 NVM Command Set Attributes 00:23:22.452 ========================== 00:23:22.452 Submission Queue Entry Size 00:23:22.452 Max: 64 00:23:22.452 Min: 64 00:23:22.452 Completion Queue Entry Size 00:23:22.452 Max: 16 00:23:22.452 Min: 16 00:23:22.452 Number of Namespaces: 32 00:23:22.452 Compare Command: Supported 00:23:22.452 Write Uncorrectable Command: Not Supported 00:23:22.452 Dataset Management Command: Supported 00:23:22.452 Write Zeroes Command: Supported 00:23:22.452 Set Features Save Field: Not Supported 00:23:22.452 Reservations: Supported 00:23:22.452 Timestamp: Not Supported 00:23:22.452 Copy: Supported 00:23:22.452 Volatile Write Cache: Present 00:23:22.452 Atomic Write Unit (Normal): 1 00:23:22.452 Atomic Write Unit (PFail): 1 00:23:22.452 Atomic Compare & Write Unit: 1 00:23:22.452 Fused Compare & Write: Supported 00:23:22.452 Scatter-Gather List 00:23:22.452 SGL Command Set: Supported 00:23:22.452 SGL Keyed: Supported 00:23:22.452 SGL Bit Bucket Descriptor: Not Supported 00:23:22.452 SGL Metadata Pointer: Not Supported 00:23:22.452 Oversized SGL: Not Supported 00:23:22.452 SGL Metadata Address: Not Supported 00:23:22.452 SGL Offset: Supported 00:23:22.452 Transport SGL Data Block: Not Supported 00:23:22.452 Replay Protected Memory Block: Not Supported 00:23:22.452 00:23:22.452 Firmware Slot Information 00:23:22.452 ========================= 00:23:22.452 Active slot: 1 00:23:22.452 Slot 1 Firmware Revision: 24.09 00:23:22.452 00:23:22.452 00:23:22.452 Commands Supported and Effects 00:23:22.452 ============================== 00:23:22.452 Admin Commands 00:23:22.452 -------------- 00:23:22.452 Get Log Page (02h): Supported 00:23:22.452 Identify (06h): Supported 00:23:22.452 Abort (08h): Supported 00:23:22.452 Set Features (09h): Supported 00:23:22.452 Get Features (0Ah): Supported 00:23:22.452 Asynchronous Event Request (0Ch): Supported 00:23:22.452 Keep Alive (18h): Supported 00:23:22.452 I/O Commands 00:23:22.452 ------------ 00:23:22.452 Flush (00h): Supported LBA-Change 00:23:22.452 Write (01h): Supported LBA-Change 00:23:22.452 Read (02h): Supported 00:23:22.452 Compare (05h): Supported 00:23:22.452 Write Zeroes (08h): Supported LBA-Change 00:23:22.452 Dataset Management (09h): Supported LBA-Change 00:23:22.452 Copy (19h): Supported LBA-Change 00:23:22.452 00:23:22.452 Error Log 00:23:22.452 ========= 00:23:22.452 00:23:22.452 Arbitration 00:23:22.452 =========== 00:23:22.452 Arbitration Burst: 1 00:23:22.452 00:23:22.452 Power Management 00:23:22.452 ================ 00:23:22.452 Number of Power States: 1 00:23:22.452 Current Power State: Power State #0 00:23:22.452 Power State #0: 00:23:22.452 Max Power: 0.00 W 00:23:22.452 Non-Operational State: Operational 00:23:22.452 Entry Latency: Not Reported 00:23:22.452 Exit Latency: Not Reported 00:23:22.452 Relative Read Throughput: 0 00:23:22.452 Relative Read Latency: 0 00:23:22.452 Relative Write Throughput: 0 00:23:22.452 Relative Write Latency: 0 00:23:22.452 Idle Power: Not Reported 00:23:22.452 Active Power: Not Reported 00:23:22.452 Non-Operational Permissive Mode: Not Supported 00:23:22.452 00:23:22.452 Health Information 00:23:22.452 ================== 00:23:22.452 Critical Warnings: 00:23:22.452 Available Spare Space: 
OK 00:23:22.452 Temperature: OK 00:23:22.452 Device Reliability: OK 00:23:22.452 Read Only: No 00:23:22.452 Volatile Memory Backup: OK 00:23:22.452 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:22.452 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:22.452 Available Spare: 0% 00:23:22.452 Available Spare Threshold: 0% 00:23:22.452 Life Percentage Used:[2024-07-15 09:31:33.526986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.452 [2024-07-15 09:31:33.526998] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xeef540) 00:23:22.452 [2024-07-15 09:31:33.527009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.452 [2024-07-15 09:31:33.527031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4fe40, cid 7, qid 0 00:23:22.452 [2024-07-15 09:31:33.527177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.452 [2024-07-15 09:31:33.527191] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.452 [2024-07-15 09:31:33.527201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.452 [2024-07-15 09:31:33.527208] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4fe40) on tqpair=0xeef540 00:23:22.452 [2024-07-15 09:31:33.527255] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:22.452 [2024-07-15 09:31:33.527274] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f3c0) on tqpair=0xeef540 00:23:22.452 [2024-07-15 09:31:33.527285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.452 [2024-07-15 09:31:33.527293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f540) on tqpair=0xeef540 00:23:22.452 [2024-07-15 09:31:33.527301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.452 [2024-07-15 09:31:33.527310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f6c0) on tqpair=0xeef540 00:23:22.452 [2024-07-15 09:31:33.527317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.452 [2024-07-15 09:31:33.527326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.452 [2024-07-15 09:31:33.527333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.452 [2024-07-15 09:31:33.527346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.452 [2024-07-15 09:31:33.527369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.452 [2024-07-15 09:31:33.527375] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.452 [2024-07-15 09:31:33.527385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.452 [2024-07-15 09:31:33.527406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.452 [2024-07-15 09:31:33.527601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.452 [2024-07-15 09:31:33.527613] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.452 [2024-07-15 09:31:33.527620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.452 [2024-07-15 09:31:33.527627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.452 [2024-07-15 09:31:33.527638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.452 [2024-07-15 09:31:33.527646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.452 [2024-07-15 09:31:33.527653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.452 [2024-07-15 09:31:33.527663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.452 [2024-07-15 09:31:33.527689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.452 [2024-07-15 09:31:33.527791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.452 [2024-07-15 09:31:33.527816] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.452 [2024-07-15 09:31:33.527825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.452 [2024-07-15 09:31:33.527832] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.453 [2024-07-15 09:31:33.527839] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:22.453 [2024-07-15 09:31:33.527847] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:22.453 [2024-07-15 09:31:33.527864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.527873] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.527879] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.453 [2024-07-15 09:31:33.527890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.453 [2024-07-15 09:31:33.527915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.453 [2024-07-15 09:31:33.527993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.453 [2024-07-15 09:31:33.528006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.453 [2024-07-15 09:31:33.528013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.453 [2024-07-15 09:31:33.528037] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528046] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528053] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.453 [2024-07-15 09:31:33.528064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.453 [2024-07-15 09:31:33.528084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.453 [2024-07-15 09:31:33.528194] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.453 [2024-07-15 09:31:33.528207] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.453 [2024-07-15 09:31:33.528213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.453 [2024-07-15 09:31:33.528236] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528246] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.453 [2024-07-15 09:31:33.528263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.453 [2024-07-15 09:31:33.528283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.453 [2024-07-15 09:31:33.528361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.453 [2024-07-15 09:31:33.528373] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.453 [2024-07-15 09:31:33.528380] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.453 [2024-07-15 09:31:33.528402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.453 [2024-07-15 09:31:33.528428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.453 [2024-07-15 09:31:33.528449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.453 [2024-07-15 09:31:33.528528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.453 [2024-07-15 09:31:33.528541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.453 [2024-07-15 09:31:33.528548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.453 [2024-07-15 09:31:33.528572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528581] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528588] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.453 [2024-07-15 09:31:33.528598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.453 [2024-07-15 09:31:33.528622] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.453 [2024-07-15 09:31:33.528695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.453 [2024-07-15 09:31:33.528709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.453 [2024-07-15 09:31:33.528715] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.453 [2024-07-15 09:31:33.528738] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.453 [2024-07-15 09:31:33.528764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.453 [2024-07-15 09:31:33.528785] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.453 [2024-07-15 09:31:33.528880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.453 [2024-07-15 09:31:33.528894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.453 [2024-07-15 09:31:33.528901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.453 [2024-07-15 09:31:33.528924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.528939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.453 [2024-07-15 09:31:33.528950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.453 [2024-07-15 09:31:33.528970] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.453 [2024-07-15 09:31:33.529048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.453 [2024-07-15 09:31:33.529060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.453 [2024-07-15 09:31:33.529066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.453 [2024-07-15 09:31:33.529089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.453 [2024-07-15 09:31:33.529115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.453 [2024-07-15 09:31:33.529135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.453 [2024-07-15 09:31:33.529211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.453 [2024-07-15 09:31:33.529224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.453 [2024-07-15 09:31:33.529231] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529238] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.453 
[2024-07-15 09:31:33.529254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529270] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.453 [2024-07-15 09:31:33.529280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.453 [2024-07-15 09:31:33.529300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.453 [2024-07-15 09:31:33.529373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.453 [2024-07-15 09:31:33.529387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.453 [2024-07-15 09:31:33.529394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.453 [2024-07-15 09:31:33.529417] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.453 [2024-07-15 09:31:33.529443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.453 [2024-07-15 09:31:33.529463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.453 [2024-07-15 09:31:33.529541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.453 [2024-07-15 09:31:33.529553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.453 [2024-07-15 09:31:33.529560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.453 [2024-07-15 09:31:33.529582] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.453 [2024-07-15 09:31:33.529608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.453 [2024-07-15 09:31:33.529628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.453 [2024-07-15 09:31:33.529704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.453 [2024-07-15 09:31:33.529715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.453 [2024-07-15 09:31:33.529722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529729] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.453 [2024-07-15 09:31:33.529745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.453 [2024-07-15 09:31:33.529754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.453 [2024-07-15 
09:31:33.529761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.454 [2024-07-15 09:31:33.529771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.454 [2024-07-15 09:31:33.529791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.454 [2024-07-15 09:31:33.529875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.454 [2024-07-15 09:31:33.529888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.454 [2024-07-15 09:31:33.529895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.529902] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.454 [2024-07-15 09:31:33.529918] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.529928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.529934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.454 [2024-07-15 09:31:33.529945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.454 [2024-07-15 09:31:33.529966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.454 [2024-07-15 09:31:33.530039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.454 [2024-07-15 09:31:33.530053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.454 [2024-07-15 09:31:33.530063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.530070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.454 [2024-07-15 09:31:33.530087] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.530096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.530103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.454 [2024-07-15 09:31:33.530113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.454 [2024-07-15 09:31:33.530134] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.454 [2024-07-15 09:31:33.530227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.454 [2024-07-15 09:31:33.530241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.454 [2024-07-15 09:31:33.530247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.530254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.454 [2024-07-15 09:31:33.530270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.530279] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.530286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.454 [2024-07-15 09:31:33.530296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.454 [2024-07-15 09:31:33.530316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.454 [2024-07-15 09:31:33.530442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.454 [2024-07-15 09:31:33.530454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.454 [2024-07-15 09:31:33.530460] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.530467] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.454 [2024-07-15 09:31:33.530483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.530493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.530499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.454 [2024-07-15 09:31:33.530509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.454 [2024-07-15 09:31:33.530529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.454 [2024-07-15 09:31:33.530605] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.454 [2024-07-15 09:31:33.530617] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.454 [2024-07-15 09:31:33.530623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.530630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.454 [2024-07-15 09:31:33.530646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.530655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.530661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.454 [2024-07-15 09:31:33.530672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.454 [2024-07-15 09:31:33.530692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.454 [2024-07-15 09:31:33.530765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.454 [2024-07-15 09:31:33.530779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.454 [2024-07-15 09:31:33.530785] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.530795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.454 [2024-07-15 09:31:33.534825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.534837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.534843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeef540) 00:23:22.454 [2024-07-15 09:31:33.534854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.454 [2024-07-15 09:31:33.534875] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4f840, cid 3, qid 0 00:23:22.454 [2024-07-15 
09:31:33.535019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:22.454 [2024-07-15 09:31:33.535033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:22.454 [2024-07-15 09:31:33.535040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:22.454 [2024-07-15 09:31:33.535046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf4f840) on tqpair=0xeef540 00:23:22.454 [2024-07-15 09:31:33.535060] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:22.454 0% 00:23:22.454 Data Units Read: 0 00:23:22.454 Data Units Written: 0 00:23:22.454 Host Read Commands: 0 00:23:22.454 Host Write Commands: 0 00:23:22.454 Controller Busy Time: 0 minutes 00:23:22.454 Power Cycles: 0 00:23:22.454 Power On Hours: 0 hours 00:23:22.454 Unsafe Shutdowns: 0 00:23:22.454 Unrecoverable Media Errors: 0 00:23:22.454 Lifetime Error Log Entries: 0 00:23:22.454 Warning Temperature Time: 0 minutes 00:23:22.454 Critical Temperature Time: 0 minutes 00:23:22.454 00:23:22.454 Number of Queues 00:23:22.454 ================ 00:23:22.454 Number of I/O Submission Queues: 127 00:23:22.454 Number of I/O Completion Queues: 127 00:23:22.454 00:23:22.454 Active Namespaces 00:23:22.454 ================= 00:23:22.454 Namespace ID:1 00:23:22.454 Error Recovery Timeout: Unlimited 00:23:22.454 Command Set Identifier: NVM (00h) 00:23:22.454 Deallocate: Supported 00:23:22.454 Deallocated/Unwritten Error: Not Supported 00:23:22.454 Deallocated Read Value: Unknown 00:23:22.454 Deallocate in Write Zeroes: Not Supported 00:23:22.454 Deallocated Guard Field: 0xFFFF 00:23:22.454 Flush: Supported 00:23:22.454 Reservation: Supported 00:23:22.454 Namespace Sharing Capabilities: Multiple Controllers 00:23:22.454 Size (in LBAs): 131072 (0GiB) 00:23:22.454 Capacity (in LBAs): 131072 (0GiB) 00:23:22.454 Utilization (in LBAs): 131072 (0GiB) 00:23:22.454 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:22.454 EUI64: ABCDEF0123456789 00:23:22.454 UUID: be4f7297-cf39-4c35-861c-2bbe43e169c3 00:23:22.454 Thin Provisioning: Not Supported 00:23:22.454 Per-NS Atomic Units: Yes 00:23:22.454 Atomic Boundary Size (Normal): 0 00:23:22.454 Atomic Boundary Size (PFail): 0 00:23:22.454 Atomic Boundary Offset: 0 00:23:22.454 Maximum Single Source Range Length: 65535 00:23:22.454 Maximum Copy Length: 65535 00:23:22.454 Maximum Source Range Count: 1 00:23:22.454 NGUID/EUI64 Never Reused: No 00:23:22.454 Namespace Write Protected: No 00:23:22.454 Number of LBA Formats: 1 00:23:22.454 Current LBA Format: LBA Format #00 00:23:22.454 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:22.454 00:23:22.454 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:22.454 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:22.454 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.454 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:22.454 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.454 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:22.454 09:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:22.454 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:22.455 
09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:22.455 rmmod nvme_tcp 00:23:22.455 rmmod nvme_fabrics 00:23:22.455 rmmod nvme_keyring 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 892164 ']' 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 892164 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 892164 ']' 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 892164 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.455 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 892164 00:23:22.713 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:22.713 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:22.713 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 892164' 00:23:22.713 killing process with pid 892164 00:23:22.713 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 892164 00:23:22.713 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 892164 00:23:22.970 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.970 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.970 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.970 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.970 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.970 09:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.970 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.970 09:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.872 09:31:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:24.872 00:23:24.872 real 0m6.208s 00:23:24.872 user 0m7.436s 00:23:24.872 sys 0m1.963s 00:23:24.872 09:31:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:24.872 09:31:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:24.872 ************************************ 00:23:24.872 END TEST nvmf_identify 00:23:24.872 ************************************ 00:23:24.872 09:31:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:24.872 09:31:35 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:24.872 09:31:35 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:24.872 09:31:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:24.872 09:31:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:24.872 ************************************ 00:23:24.872 START TEST nvmf_perf 00:23:24.872 ************************************ 00:23:24.872 09:31:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:25.133 * Looking for test storage... 00:23:25.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.133 09:31:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:27.037 09:31:38 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:27.037 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:27.038 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:27.038 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:27.038 Found net devices under 0000:09:00.0: cvl_0_0 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:27.038 Found net devices under 0000:09:00.1: cvl_0_1 
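The trace above is nvmf/common.sh enumerating supported NICs: it matches each PCI function against known Intel/Mellanox device IDs, then resolves the bound kernel net device through sysfs. A minimal standalone sketch of that discovery step, assuming only the Intel E810 device ID 0x159b observed in this log (the real script also caches the PCI bus and handles x722 and Mellanox IDs; the lspci-based enumeration here is a simplification, not the script itself):

  # enumerate E810 functions and their net devices via sysfs (requires pciutils)
  for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] || continue          # skip functions with no bound netdev
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done

On this node the loop reports cvl_0_0 and cvl_0_1, the two ports used as target and initiator below.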
00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:27.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:23:27.038 00:23:27.038 --- 10.0.0.2 ping statistics --- 00:23:27.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.038 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:27.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:23:27.038 00:23:27.038 --- 10.0.0.1 ping statistics --- 00:23:27.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.038 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:27.038 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:27.301 09:31:38 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:27.301 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.301 09:31:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:27.301 09:31:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:27.301 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=894254 00:23:27.301 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:27.301 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 894254 00:23:27.301 09:31:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 894254 ']' 00:23:27.301 09:31:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.301 09:31:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.301 09:31:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.301 09:31:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.301 09:31:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:27.301 [2024-07-15 09:31:38.300050] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:23:27.301 [2024-07-15 09:31:38.300164] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.301 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.301 [2024-07-15 09:31:38.362310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.301 [2024-07-15 09:31:38.462607] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.301 [2024-07-15 09:31:38.462662] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:27.301 [2024-07-15 09:31:38.462684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.301 [2024-07-15 09:31:38.462695] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.301 [2024-07-15 09:31:38.462705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.301 [2024-07-15 09:31:38.462811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.301 [2024-07-15 09:31:38.462864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.301 [2024-07-15 09:31:38.462940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.301 [2024-07-15 09:31:38.462942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.580 09:31:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.580 09:31:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:23:27.580 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.580 09:31:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:27.580 09:31:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:27.580 09:31:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.580 09:31:38 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:27.580 09:31:38 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:30.967 09:31:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:30.967 09:31:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:30.967 09:31:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:23:30.967 09:31:41 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:31.224 09:31:42 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:31.224 09:31:42 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:23:31.224 09:31:42 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:31.224 09:31:42 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:31.224 09:31:42 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:31.481 [2024-07-15 09:31:42.561900] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.481 09:31:42 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:31.738 09:31:42 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:31.738 09:31:42 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:31.996 09:31:43 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:31.996 09:31:43 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:32.253 09:31:43 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.509 [2024-07-15 09:31:43.569594] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.509 09:31:43 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:32.766 09:31:43 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:23:32.766 09:31:43 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:23:32.766 09:31:43 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:32.766 09:31:43 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:23:34.141 Initializing NVMe Controllers 00:23:34.141 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:23:34.141 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:23:34.141 Initialization complete. Launching workers. 00:23:34.141 ======================================================== 00:23:34.141 Latency(us) 00:23:34.141 Device Information : IOPS MiB/s Average min max 00:23:34.141 PCIE (0000:0b:00.0) NSID 1 from core 0: 86934.18 339.59 367.67 11.53 7260.21 00:23:34.141 ======================================================== 00:23:34.141 Total : 86934.18 339.59 367.67 11.53 7260.21 00:23:34.141 00:23:34.141 09:31:45 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:34.141 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.516 Initializing NVMe Controllers 00:23:35.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:35.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:35.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:35.516 Initialization complete. Launching workers. 
00:23:35.516 ======================================================== 00:23:35.516 Latency(us) 00:23:35.516 Device Information : IOPS MiB/s Average min max 00:23:35.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 100.00 0.39 10404.93 140.75 45788.08 00:23:35.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18667.35 6965.96 47907.64 00:23:35.516 ======================================================== 00:23:35.516 Total : 156.00 0.61 13370.93 140.75 47907.64 00:23:35.516 00:23:35.516 09:31:46 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:35.516 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.891 Initializing NVMe Controllers 00:23:36.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:36.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:36.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:36.891 Initialization complete. Launching workers. 00:23:36.891 ======================================================== 00:23:36.891 Latency(us) 00:23:36.891 Device Information : IOPS MiB/s Average min max 00:23:36.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8589.97 33.55 3737.98 532.06 10318.31 00:23:36.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3870.99 15.12 8305.60 6815.53 18437.88 00:23:36.891 ======================================================== 00:23:36.891 Total : 12460.96 48.68 5156.91 532.06 18437.88 00:23:36.891 00:23:36.891 09:31:47 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:36.891 09:31:47 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:36.891 09:31:47 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:36.891 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.423 Initializing NVMe Controllers 00:23:39.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:39.423 Controller IO queue size 128, less than required. 00:23:39.423 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:39.423 Controller IO queue size 128, less than required. 00:23:39.423 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:39.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:39.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:39.423 Initialization complete. Launching workers. 
00:23:39.423 ======================================================== 00:23:39.423 Latency(us) 00:23:39.423 Device Information : IOPS MiB/s Average min max 00:23:39.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1723.40 430.85 75312.14 56889.52 113678.35 00:23:39.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 588.97 147.24 221660.23 79777.63 309950.39 00:23:39.423 ======================================================== 00:23:39.423 Total : 2312.37 578.09 112587.39 56889.52 309950.39 00:23:39.423 00:23:39.423 09:31:50 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:39.423 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.423 No valid NVMe controllers or AIO or URING devices found 00:23:39.423 Initializing NVMe Controllers 00:23:39.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:39.423 Controller IO queue size 128, less than required. 00:23:39.423 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:39.423 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:39.423 Controller IO queue size 128, less than required. 00:23:39.423 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:39.423 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:39.423 WARNING: Some requested NVMe devices were skipped 00:23:39.423 09:31:50 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:39.423 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.765 Initializing NVMe Controllers 00:23:42.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:42.765 Controller IO queue size 128, less than required. 00:23:42.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:42.765 Controller IO queue size 128, less than required. 00:23:42.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:42.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:42.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:42.765 Initialization complete. Launching workers. 
00:23:42.765 00:23:42.765 ==================== 00:23:42.765 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:42.765 TCP transport: 00:23:42.765 polls: 8997 00:23:42.765 idle_polls: 5507 00:23:42.765 sock_completions: 3490 00:23:42.765 nvme_completions: 6301 00:23:42.765 submitted_requests: 9452 00:23:42.765 queued_requests: 1 00:23:42.765 00:23:42.765 ==================== 00:23:42.765 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:42.765 TCP transport: 00:23:42.765 polls: 9253 00:23:42.765 idle_polls: 5962 00:23:42.765 sock_completions: 3291 00:23:42.765 nvme_completions: 5945 00:23:42.765 submitted_requests: 8898 00:23:42.765 queued_requests: 1 00:23:42.765 ======================================================== 00:23:42.765 Latency(us) 00:23:42.765 Device Information : IOPS MiB/s Average min max 00:23:42.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1572.91 393.23 83266.86 40604.45 137063.69 00:23:42.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1484.03 371.01 86867.12 38854.47 137208.87 00:23:42.765 ======================================================== 00:23:42.765 Total : 3056.94 764.24 85014.65 38854.47 137208.87 00:23:42.765 00:23:42.765 09:31:53 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:42.765 09:31:53 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:42.765 09:31:53 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:42.766 rmmod nvme_tcp 00:23:42.766 rmmod nvme_fabrics 00:23:42.766 rmmod nvme_keyring 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 894254 ']' 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 894254 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 894254 ']' 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 894254 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 894254 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 894254' 00:23:42.766 killing process with pid 894254 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 894254 00:23:42.766 09:31:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 894254 00:23:44.145 09:31:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:44.145 09:31:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:44.145 09:31:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:44.145 09:31:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:44.145 09:31:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:44.145 09:31:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.145 09:31:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:44.145 09:31:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.057 09:31:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:46.057 00:23:46.057 real 0m21.139s 00:23:46.057 user 1m5.430s 00:23:46.057 sys 0m5.183s 00:23:46.057 09:31:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:46.057 09:31:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:46.057 ************************************ 00:23:46.057 END TEST nvmf_perf 00:23:46.057 ************************************ 00:23:46.057 09:31:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:46.057 09:31:57 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:46.057 09:31:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:46.057 09:31:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:46.057 09:31:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:46.057 ************************************ 00:23:46.057 START TEST nvmf_fio_host 00:23:46.057 ************************************ 00:23:46.057 09:31:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:46.317 * Looking for test storage... 
00:23:46.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.317 09:31:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:46.318 09:31:57 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:48.219 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:48.219 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:48.219 Found net devices under 0000:09:00.0: cvl_0_0 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:48.219 Found net devices under 0000:09:00.1: cvl_0_1 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
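The same two E810 ports are rediscovered for the fio_host run; nvmf_tcp_init then rebuilds the point-to-point topology, moving the target port into a dedicated network namespace so that traffic between the two ports traverses the physical link instead of being short-circuited inside one kernel stack. A sketch of the equivalent manual setup, with interface names and addresses taken verbatim from this log:

  ip netns add cvl_0_0_ns_spdk                  # namespace that will host the NVMe-oF target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port out of the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
  ping -c 1 10.0.0.2                            # sanity check before the target starts

The pings logged below confirm reachability in both directions before nvmf_tgt is launched inside the namespace with `ip netns exec cvl_0_0_ns_spdk`.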
00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.219 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:48.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:48.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:23:48.480 00:23:48.480 --- 10.0.0.2 ping statistics --- 00:23:48.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.480 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:48.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:23:48.480 00:23:48.480 --- 10.0.0.1 ping statistics --- 00:23:48.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.480 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=898220 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 898220 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 898220 ']' 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:48.480 09:31:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.480 [2024-07-15 09:31:59.576480] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:23:48.480 [2024-07-15 09:31:59.576557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.480 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.480 [2024-07-15 09:31:59.641413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:48.739 [2024-07-15 09:31:59.753236] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:48.739 [2024-07-15 09:31:59.753288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.739 [2024-07-15 09:31:59.753301] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.739 [2024-07-15 09:31:59.753312] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.739 [2024-07-15 09:31:59.753322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.739 [2024-07-15 09:31:59.753407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.739 [2024-07-15 09:31:59.753472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.739 [2024-07-15 09:31:59.753538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:48.739 [2024-07-15 09:31:59.753541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.739 09:31:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.739 09:31:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:23:48.739 09:31:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:48.997 [2024-07-15 09:32:00.118225] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.997 09:32:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:48.997 09:32:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:48.997 09:32:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.997 09:32:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:49.254 Malloc1 00:23:49.254 09:32:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:49.512 09:32:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:49.769 09:32:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.026 [2024-07-15 09:32:01.147845] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.026 09:32:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:50.283 09:32:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:50.542 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:50.542 fio-3.35 00:23:50.542 Starting 1 thread 00:23:50.542 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.075 00:23:53.075 test: (groupid=0, jobs=1): err= 0: pid=898580: Mon Jul 15 09:32:04 2024 00:23:53.075 read: IOPS=9079, BW=35.5MiB/s (37.2MB/s)(71.1MiB/2006msec) 00:23:53.075 slat (usec): min=2, max=157, avg= 2.58, stdev= 1.85 00:23:53.075 clat (usec): min=2554, max=13158, avg=7736.91, stdev=621.51 00:23:53.075 lat (usec): min=2582, max=13160, avg=7739.49, stdev=621.39 00:23:53.075 clat percentiles (usec): 00:23:53.075 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:23:53.075 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7898], 00:23:53.075 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:23:53.075 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[11600], 99.95th=[12387], 00:23:53.075 | 99.99th=[13173] 00:23:53.075 bw ( KiB/s): min=35344, 
max=36760, per=99.89%, avg=36280.00, stdev=635.89, samples=4 00:23:53.075 iops : min= 8836, max= 9190, avg=9070.00, stdev=158.97, samples=4 00:23:53.075 write: IOPS=9091, BW=35.5MiB/s (37.2MB/s)(71.2MiB/2006msec); 0 zone resets 00:23:53.075 slat (usec): min=2, max=129, avg= 2.72, stdev= 1.37 00:23:53.075 clat (usec): min=1388, max=11642, avg=6314.22, stdev=503.45 00:23:53.075 lat (usec): min=1397, max=11645, avg=6316.94, stdev=503.39 00:23:53.075 clat percentiles (usec): 00:23:53.075 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5932], 00:23:53.075 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6456], 00:23:53.075 | 70.00th=[ 6521], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7046], 00:23:53.075 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[ 9110], 99.95th=[11076], 00:23:53.075 | 99.99th=[11600] 00:23:53.075 bw ( KiB/s): min=36032, max=36720, per=100.00%, avg=36368.00, stdev=346.38, samples=4 00:23:53.075 iops : min= 9008, max= 9180, avg=9092.00, stdev=86.59, samples=4 00:23:53.075 lat (msec) : 2=0.02%, 4=0.12%, 10=99.74%, 20=0.12% 00:23:53.075 cpu : usr=67.33%, sys=31.02%, ctx=47, majf=0, minf=39 00:23:53.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:53.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:53.075 issued rwts: total=18214,18237,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:53.075 00:23:53.075 Run status group 0 (all jobs): 00:23:53.075 READ: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=71.1MiB (74.6MB), run=2006-2006msec 00:23:53.075 WRITE: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=71.2MiB (74.7MB), run=2006-2006msec 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:53.075 09:32:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:53.336 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:53.336 fio-3.35 00:23:53.336 Starting 1 thread 00:23:53.336 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.866 00:23:55.866 test: (groupid=0, jobs=1): err= 0: pid=898915: Mon Jul 15 09:32:06 2024 00:23:55.866 read: IOPS=8529, BW=133MiB/s (140MB/s)(268MiB/2008msec) 00:23:55.866 slat (usec): min=2, max=119, avg= 3.79, stdev= 1.84 00:23:55.866 clat (usec): min=2668, max=17395, avg=8667.39, stdev=2098.81 00:23:55.866 lat (usec): min=2672, max=17399, avg=8671.19, stdev=2098.84 00:23:55.866 clat percentiles (usec): 00:23:55.866 | 1.00th=[ 4621], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 6783], 00:23:55.866 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9110], 00:23:55.866 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11338], 95.00th=[12256], 00:23:55.866 | 99.00th=[14222], 99.50th=[14746], 99.90th=[16581], 99.95th=[16909], 00:23:55.866 | 99.99th=[17433] 00:23:55.866 bw ( KiB/s): min=62176, max=77728, per=51.48%, avg=70248.00, stdev=8597.72, samples=4 00:23:55.866 iops : min= 3886, max= 4858, avg=4390.50, stdev=537.36, samples=4 00:23:55.866 write: IOPS=4968, BW=77.6MiB/s (81.4MB/s)(143MiB/1844msec); 0 zone resets 00:23:55.866 slat (usec): min=30, max=223, avg=33.72, stdev= 5.75 00:23:55.866 clat (usec): min=5117, max=17926, avg=11217.15, stdev=1919.77 00:23:55.866 lat (usec): min=5149, max=17957, avg=11250.87, stdev=1919.72 00:23:55.866 clat percentiles (usec): 00:23:55.866 | 1.00th=[ 7111], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9503], 00:23:55.866 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:23:55.866 | 70.00th=[12125], 80.00th=[12780], 90.00th=[13829], 95.00th=[14615], 00:23:55.866 | 99.00th=[15795], 99.50th=[16581], 99.90th=[17433], 99.95th=[17695], 00:23:55.866 | 99.99th=[17957] 00:23:55.866 bw ( KiB/s): min=65440, max=80288, per=91.66%, avg=72856.00, stdev=8435.08, samples=4 00:23:55.866 iops : min= 4090, max= 5018, avg=4553.50, stdev=527.19, samples=4 00:23:55.866 lat (msec) : 4=0.17%, 10=58.05%, 20=41.78% 00:23:55.866 cpu : usr=77.33%, sys=21.52%, ctx=40, majf=0, minf=59 
00:23:55.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:55.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:55.866 issued rwts: total=17127,9161,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:55.866 00:23:55.866 Run status group 0 (all jobs): 00:23:55.866 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=268MiB (281MB), run=2008-2008msec 00:23:55.866 WRITE: bw=77.6MiB/s (81.4MB/s), 77.6MiB/s-77.6MiB/s (81.4MB/s-81.4MB/s), io=143MiB (150MB), run=1844-1844msec 00:23:55.866 09:32:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.866 09:32:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:55.867 rmmod nvme_tcp 00:23:55.867 rmmod nvme_fabrics 00:23:55.867 rmmod nvme_keyring 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 898220 ']' 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 898220 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 898220 ']' 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 898220 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 898220 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 898220' 00:23:55.867 killing process with pid 898220 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 898220 00:23:55.867 09:32:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 898220 00:23:56.126 09:32:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:56.126 09:32:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
]] 00:23:56.126 09:32:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:56.126 09:32:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:56.126 09:32:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:56.126 09:32:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.126 09:32:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.126 09:32:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.665 09:32:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.665 00:23:58.665 real 0m12.068s 00:23:58.665 user 0m35.397s 00:23:58.665 sys 0m3.970s 00:23:58.665 09:32:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:58.665 09:32:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.665 ************************************ 00:23:58.665 END TEST nvmf_fio_host 00:23:58.665 ************************************ 00:23:58.665 09:32:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:58.665 09:32:09 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:58.665 09:32:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:58.665 09:32:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:58.665 09:32:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:58.665 ************************************ 00:23:58.665 START TEST nvmf_failover 00:23:58.665 ************************************ 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:58.665 * Looking for test storage... 
00:23:58.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:58.665 09:32:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.573 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:00.574 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:00.574 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:00.574 Found net devices under 0000:09:00.0: cvl_0_0 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:00.574 Found net devices under 0000:09:00.1: cvl_0_1 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:00.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:24:00.574 00:24:00.574 --- 10.0.0.2 ping statistics --- 00:24:00.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.574 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:24:00.574 00:24:00.574 --- 10.0.0.1 ping statistics --- 00:24:00.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.574 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=901107 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 901107 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 901107 ']' 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.574 09:32:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:00.574 [2024-07-15 09:32:11.478216] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:24:00.574 [2024-07-15 09:32:11.478298] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.574 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.574 [2024-07-15 09:32:11.542559] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:00.574 [2024-07-15 09:32:11.653544] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.574 [2024-07-15 09:32:11.653596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.574 [2024-07-15 09:32:11.653625] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.574 [2024-07-15 09:32:11.653637] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.574 [2024-07-15 09:32:11.653646] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.574 [2024-07-15 09:32:11.653704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.575 [2024-07-15 09:32:11.653768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.575 [2024-07-15 09:32:11.653771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.834 09:32:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.834 09:32:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:00.834 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.834 09:32:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.834 09:32:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:00.834 09:32:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.834 09:32:11 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:01.091 [2024-07-15 09:32:12.066469] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.091 09:32:12 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:01.350 Malloc0 00:24:01.350 09:32:12 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.608 09:32:12 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:01.866 09:32:12 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.124 [2024-07-15 09:32:13.139348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.125 09:32:13 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:02.382 [2024-07-15 
09:32:13.432242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:02.382 09:32:13 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:02.640 [2024-07-15 09:32:13.725148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:02.640 09:32:13 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=901398 00:24:02.640 09:32:13 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:02.640 09:32:13 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:02.640 09:32:13 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 901398 /var/tmp/bdevperf.sock 00:24:02.640 09:32:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 901398 ']' 00:24:02.640 09:32:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.640 09:32:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.640 09:32:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.640 09:32:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.640 09:32:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:02.899 09:32:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.899 09:32:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:02.899 09:32:14 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:03.465 NVMe0n1 00:24:03.465 09:32:14 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:03.724 00:24:03.724 09:32:14 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=901533 00:24:03.724 09:32:14 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:03.724 09:32:14 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:05.108 09:32:15 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.108 [2024-07-15 09:32:16.161210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 
is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set 00:24:05.108 [2024-07-15 09:32:16.161738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xa56070 is same with the state(5) to be set
00:24:05.108 [2024-07-15 09:32:16.161755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set
...
00:24:05.109 [2024-07-15 09:32:16.161938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56070 is same with the state(5) to be set
00:24:05.109 09:32:16 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:08.404 09:32:19 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:08.662
00:24:08.662 09:32:19 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:08.921 [2024-07-15 09:32:19.928325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa57640 is same with the state(5) to be set
...
00:24:08.921 [2024-07-15 09:32:19.928461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa57640 is same with the state(5) to be set
00:24:08.921 09:32:19 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:12.206 09:32:22 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:12.206 [2024-07-15 09:32:23.185834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:12.206 09:32:23 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:13.139 09:32:24 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:13.398 [2024-07-15 09:32:24.437141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa57e70 is same with the state(5) to be set
...
00:24:13.398 [2024-07-15 09:32:24.437436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa57e70 is same with the state(5) to be set
00:24:13.398 09:32:24 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 901533
00:24:19.989 0
00:24:19.989 09:32:30 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 901398
00:24:19.989 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 901398 ']'
00:24:19.989 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 901398
00:24:19.989 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:24:19.989 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:19.989 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 901398
00:24:19.989 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:19.989 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:19.989 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 901398'
00:24:19.989 killing process with pid 901398
00:24:19.989 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 901398
00:24:19.989 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 901398
00:24:19.989 09:32:30 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:19.989 [2024-07-15 09:32:13.786064] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:24:19.989 [2024-07-15 09:32:13.786178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901398 ]
00:24:19.989 EAL: No free 2048 kB hugepages reported on node 1
00:24:19.989 [2024-07-15 09:32:13.851332] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:19.989 [2024-07-15 09:32:13.961049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:19.989 Running I/O for 15 seconds...
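Condensed, the failover exercise traced above is four RPCs plus timing: attach a second path, drop the live listener, restore the original port, then retire the temporary one. A standalone sketch of that sequence, using the socket path, addresses and NQN from this run (an illustration of the steps, not the actual host/failover.sh):

  #!/usr/bin/env bash
  # Sketch of the listener-toggle sequence exercised by host/failover.sh.
  # Assumes an nvmf target exporting nqn.2016-06.io.spdk:cnode1 on 10.0.0.2
  # and a running bdevperf whose RPC socket is /var/tmp/bdevperf.sock.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Give the initiator a second path (port 4422) before removing the active one.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn

  # Drop the listener currently carrying I/O; bdev_nvme fails over to 4422.
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  sleep 3

  # Bring the original port back, then retire the temporary one.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422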
00:24:19.989 [2024-07-15 09:32:16.162220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:19.989 [2024-07-15 09:32:16.162267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (the ASYNC EVENT REQUESTs for cid:1, cid:2 and cid:3 are aborted the same way) ...
00:24:19.990 [2024-07-15 09:32:16.162374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5a0f0 is same with the state(5) to be set
00:24:19.990 [2024-07-15 09:32:16.162442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:19.990 [2024-07-15 09:32:16.162462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (analogous command/completion pairs follow for every outstanding request, READ lba:79760-80360 and WRITE lba:80368-80752, each completed ABORTED - SQ DELETION (00/08)) ...
00:24:19.993 [2024-07-15 09:32:16.166446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.993 [2024-07-15 09:32:16.166461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
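Each NOTICE pair in the dump above is one in-flight bdevperf request being completed manually with ABORTED - SQ DELETION while the disconnected qpair is drained; the run still finishes and exits 0 above, consistent with those requests being retried after the controller reset. To gauge how much I/O was in flight across the failover events, the completions in the saved log can simply be counted. A hypothetical one-liner over the try.txt shown above, not part of the harness:

  # Count manually-aborted completions recorded in the bdevperf log.
  grep -c 'ABORTED - SQ DELETION' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt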
00:24:19.993 [2024-07-15 09:32:16.166490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:19.993 [2024-07-15 09:32:16.166506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:19.993 [2024-07-15 09:32:16.166518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80768 len:8 PRP1 0x0 PRP2 0x0
00:24:19.993 [2024-07-15 09:32:16.166532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:19.993 [2024-07-15 09:32:16.166588] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc80390 was disconnected and freed. reset controller.
00:24:19.993 [2024-07-15 09:32:16.166608] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:19.993 [2024-07-15 09:32:16.166623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.993 [2024-07-15 09:32:16.169885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.993 [2024-07-15 09:32:16.169922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5a0f0 (9): Bad file descriptor
00:24:19.993 [2024-07-15 09:32:16.202634] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:19.993 [2024-07-15 09:32:19.930328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:19.993 [2024-07-15 09:32:19.930373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (a second abort dump follows for the 09:32:19 failover event, WRITE lba:92480-92752 command/completion pairs, each completed ABORTED - SQ DELETION (00/08)) ...
00:24:19.994 [2024-07-15 09:32:19.931511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET
0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 
09:32:19.931850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.931976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.931992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.994 [2024-07-15 09:32:19.932525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.994 [2024-07-15 09:32:19.932540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.995 [2024-07-15 09:32:19.932554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.932571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.995 [2024-07-15 09:32:19.932585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.932601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.995 [2024-07-15 09:32:19.932615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.932646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.995 [2024-07-15 09:32:19.932660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.932676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.995 [2024-07-15 09:32:19.932690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.932705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.995 [2024-07-15 09:32:19.932718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.932748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.932764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93064 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.932777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.932817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.932831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.932843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93072 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.932856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:19.995 [2024-07-15 09:32:19.932869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.932884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.932896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93080 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.932909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.932922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.932933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.932945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93088 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.932958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.932971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.932982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.932998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93096 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93104 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93112 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93120 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933186] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93128 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93136 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93144 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93152 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93160 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93168 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93176 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93184 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93192 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93200 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93208 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 09:32:19.933729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93216 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.995 [2024-07-15 09:32:19.933765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.995 [2024-07-15 
09:32:19.933775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.995 [2024-07-15 09:32:19.933809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93224 len:8 PRP1 0x0 PRP2 0x0 00:24:19.995 [2024-07-15 09:32:19.933824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.933838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.933849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.933861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93232 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.933874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.933887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.933899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.933910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93240 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.933923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.933936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.933947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.933959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93248 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.933972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.933985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.933996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93256 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93264 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934097] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93272 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93280 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93288 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93296 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93304 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93312 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93320 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93328 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93336 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93344 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93352 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93360 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 
09:32:19.934700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93368 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93376 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93384 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93392 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93400 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.934959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.934970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93408 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.934983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.934996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.935007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.935018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93416 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.935031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.935044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.996 [2024-07-15 09:32:19.935055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.996 [2024-07-15 09:32:19.935071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93424 len:8 PRP1 0x0 PRP2 0x0 00:24:19.996 [2024-07-15 09:32:19.935100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.996 [2024-07-15 09:32:19.935118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.997 [2024-07-15 09:32:19.935129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.997 [2024-07-15 09:32:19.935140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93432 len:8 PRP1 0x0 PRP2 0x0 00:24:19.997 [2024-07-15 09:32:19.935152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.997 [2024-07-15 09:32:19.935165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.997 [2024-07-15 09:32:19.935176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.997 [2024-07-15 09:32:19.935187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92424 len:8 PRP1 0x0 PRP2 0x0 00:24:19.997 [2024-07-15 09:32:19.935199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.997 [2024-07-15 09:32:19.935212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.997 [2024-07-15 09:32:19.935223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.997 [2024-07-15 09:32:19.935237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92432 len:8 PRP1 0x0 PRP2 0x0 00:24:19.997 [2024-07-15 09:32:19.935250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.997 [2024-07-15 09:32:19.935263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.997 [2024-07-15 09:32:19.935273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.997 [2024-07-15 09:32:19.935284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92440 len:8 PRP1 0x0 PRP2 0x0 00:24:19.997 [2024-07-15 09:32:19.935297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.997 [2024-07-15 09:32:19.935310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.997 [2024-07-15 09:32:19.935321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.997 [2024-07-15 09:32:19.935332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:92448 len:8 PRP1 0x0 PRP2 0x0 00:24:19.997 [2024-07-15 09:32:19.935344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.997 [2024-07-15 09:32:19.935357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.997 [2024-07-15 09:32:19.935368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.997 [2024-07-15 09:32:19.935379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92456 len:8 PRP1 0x0 PRP2 0x0 00:24:19.997 [2024-07-15 09:32:19.935392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.997 [2024-07-15 09:32:19.935404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.997 [2024-07-15 09:32:19.935415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.997 [2024-07-15 09:32:19.935426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92464 len:8 PRP1 0x0 PRP2 0x0 00:24:19.997 [2024-07-15 09:32:19.935438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.997 [2024-07-15 09:32:19.935451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.997 [2024-07-15 09:32:19.935462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.997 [2024-07-15 09:32:19.935477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92472 len:8 PRP1 0x0 PRP2 0x0 00:24:19.997 [2024-07-15 09:32:19.935491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.997 [2024-07-15 09:32:19.935555] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe24d80 was disconnected and freed. reset controller. 
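Every completion above carries the same status, printed as "(00/08)": status code type 0x00 (generic command status) and status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion, the expected status when a submission queue is torn down with I/O still outstanding. A quick way to tally how much I/O a run like this aborted, sketched as a hypothetical helper that is not part of the SPDK tree (the log file name is an assumption):

    #!/usr/bin/env bash
    # count-aborts.sh - tally aborted NVMe commands per opcode in an autotest log.
    # Hypothetical helper; pass the console log as $1.
    log="${1:-console.log}"
    # Total "ABORTED - SQ DELETION" completions in the log.
    grep -c 'ABORTED - SQ DELETION' "$log"
    # nvme_io_qpair_print_command prints each command (READ/WRITE) right before
    # its aborted completion, so counting those lines per opcode counts the
    # aborted commands of each kind.
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]\+' "$log" |
        awk '{ counts[$NF]++ } END { for (op in counts) print op, counts[op] }'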
00:24:19.997 [2024-07-15 09:32:19.935574] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:19.997 [2024-07-15 09:32:19.935622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:19.997 [2024-07-15 09:32:19.935641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:19.997 [2024-07-15 09:32:19.935657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:19.997 [2024-07-15 09:32:19.935671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:19.997 [2024-07-15 09:32:19.935685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:19.997 [2024-07-15 09:32:19.935699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:19.997 [2024-07-15 09:32:19.935716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:19.997 [2024-07-15 09:32:19.935731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:19.997 [2024-07-15 09:32:19.935744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.997 [2024-07-15 09:32:19.935784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5a0f0 (9): Bad file descriptor
00:24:19.997 [2024-07-15 09:32:19.939027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.997 [2024-07-15 09:32:19.970277] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
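The failover notice shows bdev_nvme moving the controller for cnode1 from the failed path 10.0.0.2:4421 to the alternate path 10.0.0.2:4422, which the test will have registered beforehand. A minimal sketch of how two such paths are typically attached with SPDK's rpc.py (the addresses and subsystem NQN are taken from this log; the bdev name and exact flags are assumptions and can vary between SPDK versions):

    # Attach the primary TCP path for cnode1, naming the controller Nvme0.
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
    # Register the second path under the same controller name; in failover
    # multipath mode, bdev_nvme switches to it when the first path drops.
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover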
00:24:19.997 [2024-07-15 09:32:24.435797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:19.997 [2024-07-15 09:32:24.435873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (the same pair repeats for the admin queue's remaining ASYNC EVENT REQUESTs, cid:1, cid:2 and cid:3) ...
00:24:19.997 [2024-07-15 09:32:24.435978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5a0f0 is same with the state(5) to be set
00:24:19.997 [2024-07-15 09:32:24.437791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:19.997 [2024-07-15 09:32:24.437837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (the READ / "ABORTED - SQ DELETION" pair repeats for the in-flight reads from lba:16544 through lba:16728, with one WRITE at lba:17344 aborted the same way) ...
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.438621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.438635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.438649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.438663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.438677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.438691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.438705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.438734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.438750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.438767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.438783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.438797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.438822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.438837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.438852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.438867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.438882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.438896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.438911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.438925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:19.998 [2024-07-15 09:32:24.438940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.438954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.438969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.438983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.438999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.439012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.439045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.439088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.439118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.439147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.439179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.439207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.439235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439250] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.998 [2024-07-15 09:32:24.439264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.998 [2024-07-15 09:32:24.439292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.998 [2024-07-15 09:32:24.439321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.998 [2024-07-15 09:32:24.439350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.998 [2024-07-15 09:32:24.439378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.998 [2024-07-15 09:32:24.439407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.998 [2024-07-15 09:32:24.439435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.998 [2024-07-15 09:32:24.439450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.999 [2024-07-15 09:32:24.439464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.999 [2024-07-15 09:32:24.439492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439538] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16992 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.439953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.999 [2024-07-15 09:32:24.439982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.439997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:19.999 [2024-07-15 09:32:24.440170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440458] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.999 [2024-07-15 09:32:24.440716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.999 [2024-07-15 09:32:24.440731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.440744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.440759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.440772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.440786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.440805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.440838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.440853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.440867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.440881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.440896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.440910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.440925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.440939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.440957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.440972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.440988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.441002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.441032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.441067] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.441097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.441142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.441171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.000 [2024-07-15 09:32:24.441200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-07-15 09:32:24.441677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.000 [2024-07-15 09:32:24.441692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 
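What the dump above records is bdev_nvme draining its I/O queue pair for failover: every command still queued on qid:1 is completed back to the bdev layer with ABORTED - SQ DELETION (NVMe status code type 00h, generic; status code 08h, command aborted due to SQ deletion) before the qpair is freed and the controller is reset. Assuming this output lands in the try.txt capture file that failover.sh cats and removes further down, the aborts can be tallied the same way the script tallies successful resets:

    # count I/O completions aborted by SQ deletion in the captured bdevperf output
    grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt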
00:24:20.000 [2024-07-15 09:32:24.441720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:20.000 [2024-07-15 09:32:24.441736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:20.000 [2024-07-15 09:32:24.441748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17552 len:8 PRP1 0x0 PRP2 0x0
00:24:20.000 [2024-07-15 09:32:24.441761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:20.000 [2024-07-15 09:32:24.441845] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe24b70 was disconnected and freed. reset controller.
00:24:20.000 [2024-07-15 09:32:24.441867] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:24:20.000 [2024-07-15 09:32:24.441882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:20.000 [2024-07-15 09:32:24.445182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:20.000 [2024-07-15 09:32:24.445222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5a0f0 (9): Bad file descriptor
00:24:20.000 [2024-07-15 09:32:24.480718] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:20.000
00:24:20.000 Latency(us)
00:24:20.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:20.000 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:20.000 Verification LBA range: start 0x0 length 0x4000
00:24:20.000 NVMe0n1 : 15.01 8670.82 33.87 245.02 0.00 14328.64 552.20 15437.37
00:24:20.000 ===================================================================================================================
00:24:20.000 Total : 8670.82 33.87 245.02 0.00 14328.64 552.20 15437.37
00:24:20.000 Received shutdown signal, test time was about 15.000000 seconds
00:24:20.000
00:24:20.001 Latency(us)
00:24:20.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:20.001 ===================================================================================================================
00:24:20.001 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=903374
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 903374 /var/tmp/bdevperf.sock
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 903374 ']'
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:20.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:20.001 [2024-07-15 09:32:30.898645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:20.001 09:32:30 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:20.001 [2024-07-15 09:32:31.143403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:20.001 09:32:31 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:20.570 NVMe0n1
00:24:20.570 09:32:31 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:20.827
00:24:20.827 09:32:31 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:21.086
00:24:21.086 09:32:32 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:21.086 09:32:32 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:21.344 09:32:32 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:21.604 09:32:32 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:24.913 09:32:35 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:24.913 09:32:35 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:24:24.913 09:32:35 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=904043
00:24:24.913 09:32:35 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:24.913 09:32:35 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 904043
00:24:26.290 0
00:24:26.290 09:32:37 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:26.290 [2024-07-15 09:32:30.392217] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:24:26.290 [2024-07-15 09:32:30.392305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903374 ]
00:24:26.290 EAL: No free 2048 kB hugepages reported on node 1
00:24:26.290 [2024-07-15 09:32:30.451675] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:26.290 [2024-07-15 09:32:30.557917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:26.290 [2024-07-15 09:32:32.724853] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:26.290 [2024-07-15 09:32:32.724923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:26.290 [2024-07-15 09:32:32.724944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:26.290 [2024-07-15 09:32:32.724960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:26.290 [2024-07-15 09:32:32.724975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:26.290 [2024-07-15 09:32:32.724989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:26.290 [2024-07-15 09:32:32.725003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:26.290 [2024-07-15 09:32:32.725017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:26.290 [2024-07-15 09:32:32.725030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:26.290 [2024-07-15 09:32:32.725043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:26.291 [2024-07-15 09:32:32.725085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:26.291 [2024-07-15 09:32:32.725116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24780f0 (9): Bad file descriptor
00:24:26.291 [2024-07-15 09:32:32.745559] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:26.291 Running I/O for 1 seconds...
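The trace above is the whole failover scenario in miniature: the target adds listeners for nqn.2016-06.io.spdk:cnode1 on ports 4421 and 4422, bdevperf (started with -z, so it sits idle until told to run) attaches all three transport IDs under the single controller name NVMe0, the active 4420 path is detached to force bdev_nvme to fail over, and perform_tests re-runs the configured verify job. Condensed into the bare RPC sequence, with relative script paths standing in for the absolute /var/jenkins/... prefixes the log shows:

    # target side: expose the subsystem on the two extra ports
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # initiator side: register all three paths under one controller name
    for port in 4420 4421 4422; do
            scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
                    -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # drop the active path; bdev_nvme fails over to the next registered trid
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
            -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # kick off the queued bdevperf job over the surviving path
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests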
00:24:26.291
00:24:26.291 Latency(us)
00:24:26.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:26.291 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:26.291 Verification LBA range: start 0x0 length 0x4000
00:24:26.291 NVMe0n1 : 1.01 8563.11 33.45 0.00 0.00 14880.77 3398.16 12524.66
00:24:26.291 ===================================================================================================================
00:24:26.291 Total : 8563.11 33.45 0.00 0.00 14880.77 3398.16 12524.66
00:24:26.291 09:32:37 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:26.291 09:32:37 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:26.587 09:32:37 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:26.587 09:32:37 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:26.587 09:32:37 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:26.865 09:32:37 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:27.124 09:32:38 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:30.421 09:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:30.421 09:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:30.421 09:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 903374
00:24:30.421 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 903374 ']'
00:24:30.421 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 903374
00:24:30.421 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:24:30.421 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:30.421 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 903374
00:24:30.421 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:30.421 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:30.421 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 903374'
00:24:30.421 killing process with pid 903374
00:24:30.421 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 903374
00:24:30.421 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 903374
00:24:30.679 09:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:24:30.679 09:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:30.936 09:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:24:30.936 09:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:30.937 rmmod nvme_tcp
00:24:30.937 rmmod nvme_fabrics
00:24:30.937 rmmod nvme_keyring
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 901107 ']'
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 901107
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 901107 ']'
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 901107
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 901107
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 901107'
00:24:30.937 killing process with pid 901107
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 901107
00:24:30.937 09:32:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 901107
00:24:31.194 09:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:31.194 09:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:31.194 09:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:31.194 09:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:31.194 09:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:31.194 09:32:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:31.194 09:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:31.194 09:32:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:33.736 09:32:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:33.736
00:24:33.736 real	0m34.995s
00:24:33.736 user	2m3.728s
00:24:33.736 sys	0m5.666s
00:24:33.736 09:32:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:33.736 09:32:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
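The teardown just traced is symmetrical with the setup: stop the I/O generator, delete the subsystem over RPC, then let nvmftestfini unload the kernel NVMe-oF modules and kill the target. Collapsed into one place (a sketch; the PIDs are specific to this run, 903374 for bdevperf and 901107 for the nvmf target):

    kill 903374 && wait 903374     # stop bdevperf; wait works because the test script spawned it
    sync
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # pulls out nvme_tcp, nvme_fabrics, nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill 901107 && wait 901107     # stop the nvmf target reactor
    ip -4 addr flush cvl_0_1       # drop the test address from the NIC port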
00:24:33.736 ************************************
00:24:33.736 END TEST nvmf_failover
00:24:33.736 ************************************
00:24:33.736 09:32:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:33.736 09:32:44 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:24:33.736 09:32:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:33.736 09:32:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:33.736 09:32:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:33.736 ************************************
00:24:33.736 START TEST nvmf_host_discovery
00:24:33.736 ************************************
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:24:33.736 * Looking for test storage...
00:24:33.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
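run_test, provided by autotest_common.sh, is what stamps the START TEST / END TEST banners around each sub-test and times its body (the real/user/sys lines before the END banner). A rough sketch of the observable behavior -- illustrative only, not SPDK's actual implementation:

    run_test() {
            local test_name=$1
            shift
            echo '************************************'
            echo "START TEST $test_name"
            echo '************************************'
            time "$@"
            local rc=$?
            echo '************************************'
            echo "END TEST $test_name"
            echo '************************************'
            return $rc
    }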
/etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.736 09:32:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:33.737 09:32:44 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:24:33.737 09:32:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.643 09:32:46 
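discovery.sh pins its constants here: the well-known discovery NQN nqn.2014-08.org.nvmexpress.discovery on TCP port 8009, subsystem NQNs under nqn.2016-06.io.spdk:cnode, the host NQN nqn.2021-12.io.spdk:test, and a host-side RPC socket at /tmp/host.sock. Outside the harness, the same discovery endpoint could be exercised with stock nvme-cli; a sketch, using the target address this log configures a few lines later:

    # Query the discovery service for advertised subsystems.
    nvme discover -t tcp -a 10.0.0.2 -s 8009

    # Connect to the advertised subsystem with the test's host NQN.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2021-12.io.spdk:test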
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:35.643 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:35.643 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:35.643 09:32:46 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.643 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:35.644 Found net devices under 0000:09:00.0: cvl_0_0 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:35.644 Found net devices under 0000:09:00.1: cvl_0_1 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.644 09:32:46 
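The scan above works from an allow-list of PCI IDs (this box matches Intel E810, 0x8086:0x159b, driven by ice) and then resolves each PCI function to its kernel netdev through sysfs, which is where cvl_0_0 and cvl_0_1 come from. A condensed sketch of that resolution step, with the device ID taken from the log; the lspci-based loop is illustrative, not a copy of common.sh's pci_bus_cache machinery:

    # List E810 functions, then the net interfaces bound to each.
    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "Found net device under $pci: ${netdir##*/}"
        done
    done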
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:35.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:24:35.644 00:24:35.644 --- 10.0.0.2 ping statistics --- 00:24:35.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.644 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:24:35.644 00:24:35.644 --- 10.0.0.1 ping statistics --- 00:24:35.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.644 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=906645 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 906645 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 906645 ']' 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.644 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.644 [2024-07-15 09:32:46.661178] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:24:35.644 [2024-07-15 09:32:46.661262] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.644 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.644 [2024-07-15 09:32:46.724126] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.644 [2024-07-15 09:32:46.834637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.644 [2024-07-15 09:32:46.834698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.644 [2024-07-15 09:32:46.834711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.644 [2024-07-15 09:32:46.834721] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.644 [2024-07-15 09:32:46.834746] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
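The namespace wiring logged just above gives the target a private view of the hardware: one physical port (cvl_0_0) moves into the namespace cvl_0_0_ns_spdk and takes 10.0.0.2/24, its sibling (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, and nvmf_tgt itself is launched under ip netns exec, so host and target exchange real NVMe/TCP traffic on a single machine. Condensed from the commands above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root

Every target-side command after this point, including the nvmf_tgt launch, carries the "ip netns exec cvl_0_0_ns_spdk" prefix via NVMF_TARGET_NS_CMD.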
00:24:35.644 [2024-07-15 09:32:46.834770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.902 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.902 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:35.902 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:35.902 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:35.902 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.902 09:32:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.902 09:32:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.902 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.902 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.902 [2024-07-15 09:32:46.976569] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.903 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.903 09:32:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:35.903 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.903 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.903 [2024-07-15 09:32:46.984721] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:35.903 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.903 09:32:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:35.903 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.903 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.903 null0 00:24:35.903 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.903 09:32:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:35.903 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.903 09:32:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.903 null1 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=906788 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 906788 /tmp/host.sock 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 906788 ']' 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:35.903 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.903 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.903 [2024-07-15 09:32:47.056757] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:24:35.903 [2024-07-15 09:32:47.056863] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906788 ] 00:24:35.903 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.161 [2024-07-15 09:32:47.113761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.161 [2024-07-15 09:32:47.217772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:36.161 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.419 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.678 [2024-07-15 09:32:47.638488] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:24:36.678 09:32:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:37.245 [2024-07-15 09:32:48.353184] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:37.245 [2024-07-15 09:32:48.353208] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:37.245 [2024-07-15 09:32:48.353229] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:37.245 [2024-07-15 09:32:48.439524] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:37.504 [2024-07-15 09:32:48.577054] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:24:37.504 [2024-07-15 09:32:48.577079] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:37.763 09:32:48 
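The rpc_cmd/jq/sort/xargs blocks repeated through this section are the test's polling primitives: get_subsystem_names and get_bdev_list flatten RPC output to one sorted line, and waitforcondition re-evaluates a shell condition up to ten times with a one-second pause, exactly the max=10 / (( max-- )) / sleep 1 trace visible above. A sketch of those helpers against the host's RPC socket; the rpc.py path follows the usual SPDK tree layout and is an assumption, and the helper bodies are reconstructed from the logged calls rather than copied from discovery.sh:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock"

    get_subsystem_names() { $RPC bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
    get_bdev_list()       { $RPC bdev_get_bdevs            | jq -r '.[].name' | sort | xargs; }

    waitforcondition() {
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # e.g. block until discovery has attached the controller and surfaced its namespace:
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'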
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:37.763 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:37.764 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:37.764 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:37.764 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:37.764 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:37.764 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:37.764 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:37.764 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:37.764 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:37.764 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:37.764 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.764 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.764 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:38.023 09:32:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.023 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.023 [2024-07-15 09:32:49.078738] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:38.023 [2024-07-15 09:32:49.079632] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:38.024 [2024-07-15 09:32:49.079676] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.024 [2024-07-15 09:32:49.208504] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:38.024 09:32:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:38.282 [2024-07-15 09:32:49.268008] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:38.282 [2024-07-15 09:32:49.268031] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:38.282 [2024-07-15 09:32:49.268040] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.222 [2024-07-15 09:32:50.307459] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:39.222 [2024-07-15 09:32:50.307499] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:39.222 [2024-07-15 09:32:50.308759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.222 [2024-07-15 09:32:50.308790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.222 [2024-07-15 09:32:50.308815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.222 [2024-07-15 09:32:50.308831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.222 [2024-07-15 09:32:50.308845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.222 [2024-07-15 09:32:50.308869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.222 [2024-07-15 09:32:50.308883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.222 [2024-07-15 09:32:50.308896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.222 [2024-07-15 09:32:50.308909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd83c00 is same with the state(5) to be set 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.222 09:32:50 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:39.222 [2024-07-15 09:32:50.318765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd83c00 (9): Bad file descriptor 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.222 [2024-07-15 09:32:50.328811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.222 [2024-07-15 09:32:50.329033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.222 [2024-07-15 09:32:50.329063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd83c00 with addr=10.0.0.2, port=4420 00:24:39.222 [2024-07-15 09:32:50.329091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd83c00 is same with the state(5) to be set 00:24:39.222 [2024-07-15 09:32:50.329114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd83c00 (9): Bad file descriptor 00:24:39.222 [2024-07-15 09:32:50.329134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.222 [2024-07-15 09:32:50.329156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.222 [2024-07-15 09:32:50.329171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.222 [2024-07-15 09:32:50.329191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
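The waitforcondition helper exercised throughout this trace lives in common/autotest_common.sh. A minimal sketch of the polling pattern, reconstructed from the xtrace records above (the variable names and the one-second retry interval mirror the @912-@918 lines; a sketch, not the verbatim source):

    waitforcondition() {
        # Poll an arbitrary shell condition until it holds or ~10 tries elapse.
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0        # condition met
            fi
            sleep 1             # retry interval visible in the trace (@918)
        done
        return 1                # lets the caller fail the test
    }

Callers pass the condition as one quoted string, exactly as traced above, e.g. waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'.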
00:24:39.222 [2024-07-15 09:32:50.338914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.222 [2024-07-15 09:32:50.339041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.222 [2024-07-15 09:32:50.339067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd83c00 with addr=10.0.0.2, port=4420 00:24:39.222 [2024-07-15 09:32:50.339083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd83c00 is same with the state(5) to be set 00:24:39.222 [2024-07-15 09:32:50.339104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd83c00 (9): Bad file descriptor 00:24:39.222 [2024-07-15 09:32:50.339123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.222 [2024-07-15 09:32:50.339136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.222 [2024-07-15 09:32:50.339149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.222 [2024-07-15 09:32:50.339167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.222 [2024-07-15 09:32:50.348985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.222 [2024-07-15 09:32:50.349118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.222 [2024-07-15 09:32:50.349145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd83c00 with addr=10.0.0.2, port=4420 00:24:39.222 [2024-07-15 09:32:50.349161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd83c00 is same with the state(5) to be set 00:24:39.222 [2024-07-15 09:32:50.349183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd83c00 (9): Bad file descriptor 00:24:39.222 [2024-07-15 09:32:50.349203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.222 [2024-07-15 09:32:50.349216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.222 [2024-07-15 09:32:50.349229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.222 [2024-07-15 09:32:50.349248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
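The repeated "connect() failed, errno = 111" records in this stretch are expected: errno 111 is ECONNREFUSED. The step at host/discovery.sh@127 just removed the 4420 listener, so every reconnect attempt the host driver makes against that port is refused until the discovery poller drops the dead path and converges on 4421. The provoking call, copied from the trace:

    # Target side: drop the 4420 listener; the initiator's reconnect storm
    # above is the intended fallout until discovery settles on 4421.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420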
00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:39.222 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:39.222 [2024-07-15 09:32:50.359056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.222 [2024-07-15 09:32:50.359309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.222 [2024-07-15 09:32:50.359338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd83c00 with addr=10.0.0.2, port=4420 00:24:39.222 [2024-07-15 09:32:50.359354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd83c00 is same with the state(5) to be set 00:24:39.223 [2024-07-15 09:32:50.359377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd83c00 (9): Bad file descriptor 00:24:39.223 [2024-07-15 09:32:50.359397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.223 [2024-07-15 09:32:50.359410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.223 [2024-07-15 09:32:50.359423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.223 [2024-07-15 09:32:50.359442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
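The get_bdev_list and get_subsystem_paths helpers polled above reduce to rpc-plus-jq one-liners. Reconstructed from the traced pipelines at host/discovery.sh@55 and @63 (a sketch assembled from the xtrace, not the verbatim source):

    get_bdev_list() {
        # Sorted, space-joined bdev names from the host app, e.g. "nvme0n1 nvme0n2".
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {
        # Sorted trsvcid list for one controller, e.g. "4420 4421".
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }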
00:24:39.223 [2024-07-15 09:32:50.369168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.223 [2024-07-15 09:32:50.369335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.223 [2024-07-15 09:32:50.369364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd83c00 with addr=10.0.0.2, port=4420 00:24:39.223 [2024-07-15 09:32:50.369380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd83c00 is same with the state(5) to be set 00:24:39.223 [2024-07-15 09:32:50.369402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd83c00 (9): Bad file descriptor 00:24:39.223 [2024-07-15 09:32:50.369423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.223 [2024-07-15 09:32:50.369436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.223 [2024-07-15 09:32:50.369449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.223 [2024-07-15 09:32:50.369467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.223 [2024-07-15 09:32:50.379238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.223 [2024-07-15 09:32:50.379415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.223 [2024-07-15 09:32:50.379441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd83c00 with addr=10.0.0.2, port=4420 00:24:39.223 [2024-07-15 09:32:50.379462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd83c00 is same with the state(5) to be set 00:24:39.223 [2024-07-15 09:32:50.379483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd83c00 (9): Bad file descriptor 00:24:39.223 [2024-07-15 09:32:50.379503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.223 [2024-07-15 09:32:50.379515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.223 [2024-07-15 09:32:50.379527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.223 [2024-07-15 09:32:50.379545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
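is_notification_count_eq rests on a counter helper whose traced pieces appear at host/discovery.sh@74-@75. The notify_id bookkeeping below is inferred from the values in the trace (notify_id stays at 2 while zero new notifications arrive, then advances to 4 after two), so treat it as an assumption rather than the exact source:

    get_notification_count() {
        # Count notifications newer than the last-seen id, then advance the id
        # (advance-by-count is inferred from the 2 -> 4 progression in the trace).
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }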
00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.223 [2024-07-15 09:32:50.389303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.223 [2024-07-15 09:32:50.389474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.223 [2024-07-15 09:32:50.389500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd83c00 with addr=10.0.0.2, port=4420 00:24:39.223 [2024-07-15 09:32:50.389516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd83c00 is same with the state(5) to be set 00:24:39.223 [2024-07-15 09:32:50.389538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd83c00 (9): Bad file descriptor 00:24:39.223 [2024-07-15 09:32:50.389558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.223 [2024-07-15 09:32:50.389571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.223 [2024-07-15 09:32:50.389584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.223 [2024-07-15 09:32:50.389602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.223 [2024-07-15 09:32:50.393881] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:39.223 [2024-07-15 09:32:50.393911] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:39.223 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 
-- # xtrace_disable 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.484 09:32:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.861 [2024-07-15 09:32:51.631651] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:40.861 [2024-07-15 09:32:51.631672] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:40.861 [2024-07-15 09:32:51.631692] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:40.861 [2024-07-15 09:32:51.718999] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:40.861 [2024-07-15 09:32:51.981442] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:40.861 [2024-07-15 09:32:51.981479] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:40.861 request: 00:24:40.861 { 00:24:40.861 "name": "nvme", 00:24:40.861 "trtype": "tcp", 00:24:40.861 "traddr": "10.0.0.2", 00:24:40.861 "adrfam": "ipv4", 00:24:40.861 "trsvcid": "8009", 00:24:40.861 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:40.861 "wait_for_attach": true, 00:24:40.861 "method": "bdev_nvme_start_discovery", 00:24:40.861 "req_id": 1 00:24:40.861 } 00:24:40.861 Got JSON-RPC error response 00:24:40.861 response: 00:24:40.861 { 00:24:40.861 "code": -17, 00:24:40.861 "message": "File exists" 00:24:40.861 } 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:40.861 09:32:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:40.861 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.861 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:40.861 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:40.861 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.861 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:40.861 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.861 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.861 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:40.861 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.121 request: 00:24:41.121 { 00:24:41.121 "name": "nvme_second", 00:24:41.121 "trtype": "tcp", 00:24:41.121 "traddr": "10.0.0.2", 00:24:41.121 "adrfam": "ipv4", 00:24:41.121 "trsvcid": "8009", 00:24:41.121 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:41.121 "wait_for_attach": true, 00:24:41.121 "method": "bdev_nvme_start_discovery", 00:24:41.121 "req_id": 1 00:24:41.121 } 00:24:41.121 Got JSON-RPC error response 00:24:41.121 response: 00:24:41.121 { 00:24:41.121 "code": -17, 00:24:41.121 "message": "File exists" 00:24:41.121 } 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.121 09:32:52 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.121 09:32:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.060 [2024-07-15 09:32:53.192877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.060 [2024-07-15 09:32:53.192923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9ec90 with addr=10.0.0.2, port=8010 00:24:42.060 [2024-07-15 09:32:53.192952] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:42.060 [2024-07-15 09:32:53.192967] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:42.060 [2024-07-15 09:32:53.192979] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:43.438 [2024-07-15 09:32:54.195386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.438 [2024-07-15 09:32:54.195461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9ec90 with addr=10.0.0.2, port=8010 00:24:43.438 [2024-07-15 09:32:54.195493] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:43.438 [2024-07-15 09:32:54.195507] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:43.438 [2024-07-15 09:32:54.195521] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:44.008 [2024-07-15 09:32:55.197536] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:44.008 request: 00:24:44.008 { 00:24:44.008 "name": "nvme_second", 00:24:44.008 "trtype": "tcp", 00:24:44.008 "traddr": "10.0.0.2", 00:24:44.008 "adrfam": "ipv4", 00:24:44.008 "trsvcid": "8010", 00:24:44.008 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:44.008 "wait_for_attach": false, 00:24:44.008 "attach_timeout_ms": 3000, 00:24:44.008 "method": "bdev_nvme_start_discovery", 00:24:44.008 "req_id": 1 00:24:44.008 } 00:24:44.008 Got JSON-RPC error response 00:24:44.008 response: 00:24:44.008 { 00:24:44.008 "code": -110, 
00:24:44.008 "message": "Connection timed out" 00:24:44.008 } 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 906788 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:44.268 rmmod nvme_tcp 00:24:44.268 rmmod nvme_fabrics 00:24:44.268 rmmod nvme_keyring 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 906645 ']' 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 906645 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 906645 ']' 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 906645 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 906645 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:44.268 
09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 906645' 00:24:44.268 killing process with pid 906645 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 906645 00:24:44.268 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 906645 00:24:44.527 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:44.527 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:44.527 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:44.527 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:44.527 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:44.527 09:32:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.527 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.527 09:32:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:47.063 00:24:47.063 real 0m13.288s 00:24:47.063 user 0m19.276s 00:24:47.063 sys 0m2.770s 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.063 ************************************ 00:24:47.063 END TEST nvmf_host_discovery 00:24:47.063 ************************************ 00:24:47.063 09:32:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:47.063 09:32:57 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:47.063 09:32:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:47.063 09:32:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:47.063 09:32:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.063 ************************************ 00:24:47.063 START TEST nvmf_host_multipath_status 00:24:47.063 ************************************ 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:47.063 * Looking for test storage... 
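The two JSON-RPC failures earlier in this test were deliberate negative checks: starting a second discovery service under an existing name returns -17 "File exists", and a discovery target that never answers on port 8010 times out with -110 "Connection timed out" once the 3000 ms attach_timeout_ms expires. The NOT wrapper that lets an expected failure count as a pass can be sketched as follows (the real helper in common/autotest_common.sh also classifies exit codes via the es/valid_exec_arg machinery visible at @648-@675; this simplification is an assumption):

    NOT() {
        # Expected-failure assertion: succeed only if the wrapped command fails.
        if "$@"; then
            return 1
        fi
        return 0
    }
    # e.g. NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
    #          -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w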
00:24:47.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:47.063 09:32:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:47.063 09:32:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:48.967 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:48.967 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
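The scan above classifies NICs by PCI vendor/device id (0x8086:0x159b is collected into the e810 array and bound to the ice driver, per the @342 checks), then resolves each PCI function to its kernel netdev through sysfs. The core of that loop, reconstructed from the nvmf/common.sh@382-@401 trace records (a sketch, not the verbatim source):

    for pci in "${pci_devs[@]}"; do
        # Each PCI function lists its netdevs under /sys/bus/pci/devices/<bdf>/net/.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the name
        net_devs+=("${pci_net_devs[@]}")          # e.g. cvl_0_0, cvl_0_1
    done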
00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:48.967 Found net devices under 0000:09:00.0: cvl_0_0 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:48.967 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:48.968 Found net devices under 0000:09:00.1: cvl_0_1 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:48.968 09:32:59 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:48.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:48.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:24:48.968 00:24:48.968 --- 10.0.0.2 ping statistics --- 00:24:48.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.968 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:48.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:48.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:24:48.968 00:24:48.968 --- 10.0.0.1 ping statistics --- 00:24:48.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.968 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=909821 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 909821 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 909821 ']' 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:48.968 09:32:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:48.968 [2024-07-15 09:32:59.958850] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
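Condensing the bring-up traced above into one runnable sequence: the first E810 port (cvl_0_0) is moved into a fresh network namespace to act as the NVMe-oF target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, connectivity is ping-verified both ways, and nvmf_tgt is launched inside the namespace. A sketch assuming the interface names and the SPDK build path from this run:

NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1     # start both ports from a clean slate
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port 4420, as the trace does
ping -c 1 10.0.0.2                                     # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target namespace -> root namespace
modprobe nvme-tcp
# Target runs inside the namespace; -e 0xFFFF enables all tracepoint groups, -m 0x3 pins cores 0-1.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &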
00:24:48.968 [2024-07-15 09:32:59.958940] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.968 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.968 [2024-07-15 09:33:00.023727] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:48.968 [2024-07-15 09:33:00.132772] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.968 [2024-07-15 09:33:00.132844] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.968 [2024-07-15 09:33:00.132874] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.968 [2024-07-15 09:33:00.132885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.968 [2024-07-15 09:33:00.132895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.968 [2024-07-15 09:33:00.132974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.968 [2024-07-15 09:33:00.132978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.225 09:33:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:49.225 09:33:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:49.225 09:33:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:49.225 09:33:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:49.225 09:33:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:49.225 09:33:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.225 09:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=909821 00:24:49.225 09:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:49.493 [2024-07-15 09:33:00.501249] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.493 09:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:49.751 Malloc0 00:24:49.751 09:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:50.007 09:33:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:50.264 09:33:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.521 [2024-07-15 09:33:01.542191] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.521 09:33:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:50.778 [2024-07-15 09:33:01.802899] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:50.778 09:33:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=910208 00:24:50.778 09:33:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:50.778 09:33:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:50.778 09:33:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 910208 /var/tmp/bdevperf.sock 00:24:50.778 09:33:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 910208 ']' 00:24:50.778 09:33:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:50.778 09:33:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:50.778 09:33:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:50.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:50.778 09:33:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:50.778 09:33:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:51.036 09:33:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.036 09:33:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:51.036 09:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:51.294 09:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:51.859 Nvme0n1 00:24:51.860 09:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:52.429 Nvme0n1 00:24:52.429 09:33:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:52.429 09:33:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:54.358 09:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:54.358 09:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:54.615 09:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:54.874 09:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:55.835 09:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:55.835 09:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:55.835 09:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.835 09:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:56.093 09:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.093 09:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:56.093 09:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.093 09:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:56.352 09:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:56.352 09:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:56.352 09:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.352 09:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:56.611 09:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.611 09:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:56.611 09:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.611 09:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:56.870 09:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.870 09:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:56.870 09:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.870 09:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:57.128 09:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.128 09:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:57.128 09:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.128 09:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:57.386 09:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.386 09:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:57.386 09:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:57.953 09:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:57.953 09:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:59.327 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:59.327 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:59.327 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.327 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:59.327 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.327 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:59.328 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.328 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:59.585 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.585 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:59.585 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.585 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:59.842 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.842 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:59.842 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.842 09:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:00.100 09:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.100 09:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:00.100 09:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.100 09:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:00.358 09:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.358 09:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:00.358 09:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.358 09:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:00.615 09:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.615 09:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:00.615 09:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:00.873 09:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:01.131 09:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:02.065 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:02.066 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:02.066 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.066 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:02.322 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.322 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:02.322 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.322 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:02.580 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.580 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:02.580 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.580 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:02.838 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.838 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:02.838 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.838 09:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:03.095 09:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.095 09:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:03.095 09:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.095 09:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:03.353 09:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.353 09:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:03.353 09:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.353 09:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:03.611 09:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.611 09:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:03.611 09:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:03.869 09:33:14 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:04.127 09:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:05.060 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:05.060 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:05.060 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.060 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.319 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.319 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:05.319 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.319 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:05.576 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:05.576 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:05.577 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.577 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:05.833 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.833 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:05.833 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.833 09:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:06.090 09:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.090 09:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:06.090 09:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.090 09:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:06.346 09:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:25:06.346 09:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:06.346 09:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.346 09:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:06.604 09:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.604 09:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:06.604 09:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:06.861 09:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:07.121 09:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:08.053 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:08.053 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:08.053 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.053 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:08.310 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.310 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:08.310 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.310 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:08.567 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.567 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:08.567 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.567 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:08.828 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.828 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:25:08.828 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.828 09:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:09.085 09:33:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.085 09:33:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:09.085 09:33:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.085 09:33:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:09.341 09:33:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:09.341 09:33:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:09.341 09:33:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.341 09:33:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:09.599 09:33:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:09.599 09:33:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:09.599 09:33:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:09.855 09:33:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:10.112 09:33:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:11.048 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:11.048 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:11.048 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.048 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:11.305 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:11.305 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:11.305 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.305 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:11.563 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.563 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:11.563 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.563 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:11.819 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.819 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:11.819 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.819 09:33:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:12.077 09:33:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.077 09:33:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:12.077 09:33:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.077 09:33:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:12.334 09:33:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:12.334 09:33:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:12.334 09:33:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.334 09:33:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:12.590 09:33:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.590 09:33:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:12.847 09:33:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:12.847 09:33:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:25:13.104 09:33:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:13.362 09:33:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:14.738 09:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:14.738 09:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:14.738 09:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.738 09:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:14.738 09:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.738 09:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:14.738 09:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.738 09:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:14.995 09:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.995 09:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:14.995 09:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.995 09:33:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:15.253 09:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.253 09:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:15.253 09:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.253 09:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:15.510 09:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.510 09:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:15.510 09:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.510 09:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:15.767 09:33:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.767 09:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:15.767 09:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.767 09:33:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:16.024 09:33:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.024 09:33:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:16.024 09:33:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:16.282 09:33:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:16.541 09:33:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:17.480 09:33:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:17.480 09:33:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:17.480 09:33:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.480 09:33:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:17.737 09:33:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:17.737 09:33:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:17.737 09:33:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.737 09:33:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:17.995 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.995 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:17.995 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.995 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:18.252 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.252 09:33:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:18.252 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.252 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:18.511 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.511 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:18.511 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.511 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:18.771 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.771 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:18.771 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.771 09:33:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:19.055 09:33:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.055 09:33:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:19.055 09:33:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:19.314 09:33:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:19.573 09:33:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:20.511 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:20.511 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:20.511 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.511 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:20.769 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.769 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:20.769 09:33:31 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.769 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:21.027 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.027 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:21.027 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.027 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:21.285 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.285 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:21.285 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.285 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:21.543 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.543 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:21.543 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.543 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:21.801 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.801 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:21.801 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.801 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:22.090 09:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.090 09:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:22.090 09:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:22.348 09:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:25:20.511 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:25:20.511 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:20.511 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:20.511 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:20.769 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:20.769 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:20.769 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:20.769 09:33:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:21.027 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:21.027 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:21.027 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:21.027 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:21.285 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:21.285 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:21.285 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:21.285 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:21.543 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:21.543 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:21.543 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:21.543 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:21.801 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:21.801 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:21.801 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:21.801 09:33:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:22.090 09:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
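(check_status, invoked at host/multipath_status.sh@131 with six booleans, expands into the six port_status assertions traced at @68-@73. A sketch of that mapping, reconstructed from the call sites, with port_status as sketched earlier:)

check_status() {
    # current, connected, accessible: first for listener 4420, then for 4421
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

(With both listeners non_optimized all six checks come back true; the next step flips 4421 to inaccessible and expects check_status true false true true true false instead.)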
00:25:22.090 09:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:25:22.090 09:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:22.348 09:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:25:22.606 09:33:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:25:23.542 09:33:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:25:23.542 09:33:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:23.542 09:33:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:23.542 09:33:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:23.799 09:33:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:23.799 09:33:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:23.799 09:33:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:23.799 09:33:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:24.057 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:24.057 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:24.057 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:24.057 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:24.315 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:24.316 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:24.316 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:24.316 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:24.574 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:24.574 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:24.574 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:24.574 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:24.832 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:24.832 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:24.832 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:24.832 09:33:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:25.090 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:25.090 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 910208
00:25:25.090 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 910208 ']'
00:25:25.090 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 910208
00:25:25.090 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:25:25.090 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:25.090 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 910208
00:25:25.090 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:25:25.090 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:25:25.090 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 910208'
00:25:25.090 killing process with pid 910208
00:25:25.090 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 910208
00:25:25.090 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 910208
00:25:25.090 Connection closed with partial response:
00:25:25.090
00:25:25.090
00:25:25.360 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 910208
00:25:25.360 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
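(killprocess, whose trace from common/autotest_common.sh@948-@972 appears above, is the suite's guarded kill-and-reap helper. A minimal sketch reconstructed from that trace; the non-Linux and sudo-wrapped branches guarded by the @953/@958 tests are reduced to comments here:)

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                # @948: refuse an empty pid
    kill -0 "$pid"                           # @952: target must still be alive
    if [ "$(uname)" = Linux ]; then          # @953: per-OS ps invocation
        process_name=$(ps --no-headers -o comm= "$pid")    # @954
    fi
    # @958: if process_name is sudo, the real target is its child (not the case here)
    echo "killing process with pid $pid"     # @966
    kill "$pid"                              # @967
    wait "$pid"                              # @972: reap it and propagate its exit status
}

(Here it tears down bdevperf, pid 910208 running as reactor_2, mid-run; bdevperf reports "Connection closed with partial response", and the test then dumps the bdevperf log, try.txt, which follows.)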
00:25:25.360 [2024-07-15 09:33:01.865569] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:25:25.360 [2024-07-15 09:33:01.865659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910208 ]
00:25:25.360 EAL: No free 2048 kB hugepages reported on node 1
00:25:25.360 [2024-07-15 09:33:01.928795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:25.360 [2024-07-15 09:33:02.035928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:25:25.360 Running I/O for 90 seconds...
00:25:25.360 [2024-07-15 09:33:17.936985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.360 [2024-07-15 09:33:17.937050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:25.360 [2024-07-15 09:33:17.937104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:25.360 [2024-07-15 09:33:17.937131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs of the same shape trimmed here: the queued READ/WRITE commands (lba 99800 through 100816, cycled repeatedly) all complete on qid:1 with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
dnr:0 00:25:25.363 [2024-07-15 09:33:17.951246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.951271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.951305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.951331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.951364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.951394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.951429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.951453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.951489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.951513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.951550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.951575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.951610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.951635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.951669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.951694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.951743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.951784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.951828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.951857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.951892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.951919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.951954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.951981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.952949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.952975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.953017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.953044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.953080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.953107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.953143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:25.363 [2024-07-15 09:33:17.953170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.953204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.953230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.953264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.953290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.953325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.953365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.953400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.953425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.953461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.953486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.953522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.953546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.953583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.953609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.953643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.953669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.953704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.953731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.953772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.953799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.954217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.954244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.954279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.954304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.955944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.955978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.956006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.956042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.956069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.956123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.956163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:25:25.363 [2024-07-15 09:33:17.956197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.956222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.956256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.956279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.956313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.363 [2024-07-15 09:33:17.956338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:25.363 [2024-07-15 09:33:17.956385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.956410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.956458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.956491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.956526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.956552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.956587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.956613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.956647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.956673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.956708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.956748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.956784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.956836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.956888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.956915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.956952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.956981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.957943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.957969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.364 [2024-07-15 09:33:17.958246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:102 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.958952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.958980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.364 [2024-07-15 09:33:17.959691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.364 [2024-07-15 09:33:17.959747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.959925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.959951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:25:25.364 [2024-07-15 09:33:17.959986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.960013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.960049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.960076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.961278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.961311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.961352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.961380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.961418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.961441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.961473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.961495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.961540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.961569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.961602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.961624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.961658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.961682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:25.364 [2024-07-15 09:33:17.961714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.364 [2024-07-15 09:33:17.961735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
00:25:25.364 [2024-07-15 09:33:17.961778 - 09:33:17.965172] nvme_qpair.c: repeated *NOTICE* pairs from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion: WRITE sqid:1 nsid:1 lba:100112-100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0054-0001 p:0 m:0 dnr:0 (first entry truncated at section start)
00:25:25.365 [2024-07-15 09:33:18.431793 - 09:33:18.437707] nvme_qpair.c: same *NOTICE* pattern: WRITE sqid:1 nsid:1 lba:100480-100816 and lba:99824-100432 len:8 SGL DATA BLOCK OFFSET, plus READ lba:99800/99808/99816 SGL TRANSPORT DATA BLOCK, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0002-007d p:0 m:0 dnr:0
00:25:25.367 [2024-07-15 09:33:33.585580 - 09:33:33.587108] nvme_qpair.c: same *NOTICE* pattern: WRITE sqid:1 nsid:1 lba:30008-30304 len:8 SGL DATA BLOCK OFFSET and READ lba:29496-29736 SGL TRANSPORT DATA BLOCK, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0006-0020 p:0 m:0 dnr:0 (final entry truncated at section end)
0x0 len:0x1000 00:25:25.367 [2024-07-15 09:33:33.587138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.367 [2024-07-15 09:33:33.587190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.367 [2024-07-15 09:33:33.587230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.367 [2024-07-15 09:33:33.587270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.367 [2024-07-15 09:33:33.587309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.367 [2024-07-15 09:33:33.587348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.367 [2024-07-15 09:33:33.587390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.367 [2024-07-15 09:33:33.587429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.367 [2024-07-15 09:33:33.587473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.367 [2024-07-15 09:33:33.587514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:30 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.367 [2024-07-15 09:33:33.587553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.367 [2024-07-15 09:33:33.587608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.367 [2024-07-15 09:33:33.587646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.367 [2024-07-15 09:33:33.587684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.367 [2024-07-15 09:33:33.587722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.367 [2024-07-15 09:33:33.587777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:25.367 [2024-07-15 09:33:33.587799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.587824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.587846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.587864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.587886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.587903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.587925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.587942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.587964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.587984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.588222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.588261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 
dnr:0 00:25:25.368 [2024-07-15 09:33:33.588361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.588616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.588971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.588989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.589476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.589506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.589534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.589552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.589575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.589592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.589614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.589630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.589652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.589669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.589692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.589708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.589730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.589747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.589769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.589791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.589843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.589863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.589885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.589902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.589940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.589957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.589979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.589999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.590021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.590037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.590060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:25.368 [2024-07-15 09:33:33.590075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.590096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.590112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.590134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.590150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.590172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.590188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.590868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.590894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.590921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.590940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.590963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.590980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.591019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.368 [2024-07-15 09:33:33.591437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.591817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.591836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.592249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.592274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.592302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.592320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:25.368 [2024-07-15 09:33:33.592342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.368 [2024-07-15 09:33:33.592359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 
dnr:0 00:25:25.369 [2024-07-15 09:33:33.592382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.592399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.592421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.592438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.592460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.592477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.592499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.592516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.592538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.592555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.592597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.592615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.592652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.592670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.592693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.592709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.592731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.592748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.592770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.592787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.592817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.592835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.593142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.593166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.593194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.593212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.593241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.593258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.593282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.593299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.593321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.593338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.593360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.593376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.593399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.593435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.593458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.593490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.593515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.593531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.593554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.593570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.593593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.593610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.595796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.595842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.595871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.595889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.595912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.595930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.595953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.595970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.595992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.596010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.596032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.596049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.596072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.596089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.596128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:25.369 [2024-07-15 09:33:33.596154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.596179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.596196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.596219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.596235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.596257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.596275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.596297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.596314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.596337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.596353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
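The two notice formats flooding this stretch of the log come from SPDK's host-side qpair tracing: nvme_io_qpair_print_command echoes each queued WRITE/READ (submission queue id, command id, namespace, LBA, length, SGL type), and spdk_nvme_print_completion prints the matching completion status as NAME (sct/sc) in hex. Every completion here is ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (path-related) with status code 0x02 (ANA inaccessible), and dnr:0 leaves the host free to retry, consistent with an ANA-state test driving this path inaccessible. A minimal parsing sketch (not part of the CI output; the regex and the trimmed status table are inferred from the log lines above and the NVMe path-related status codes):

```python
# Sketch only: decode the spdk_nvme_print_completion NOTICE lines seen in this log.
import re
from typing import Optional

# Matches e.g. "... *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67
#               cdw0:0 sqhd:007c p:0 m:0 dnr:0"
COMPLETION_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+) cdw0:(?P<cdw0>\w+) "
    r"sqhd:(?P<sqhd>[0-9a-f]+) p:(?P<p>\d) m:(?P<m>\d) dnr:(?P<dnr>\d)"
)

# SCT 0x3 is the NVMe "Path Related Status" type; SC 0x02 within it is
# "Asymmetric Access Inaccessible", matching the (03/02) printed above.
PATH_RELATED_SC = {
    0x00: "Internal Path Error",
    0x01: "Asymmetric Access Persistent Loss",
    0x02: "Asymmetric Access Inaccessible",
    0x03: "Asymmetric Access Transition",
}

def decode(line: str) -> Optional[dict]:
    """Return parsed completion fields, or None for non-completion lines."""
    m = COMPLETION_RE.search(line)
    if m is None:
        return None
    sct, sc = int(m["sct"], 16), int(m["sc"], 16)
    return {
        "qid": int(m["qid"]),
        "cid": int(m["cid"]),
        "sqhd": int(m["sqhd"], 16),       # SQ head pointer echoed by the controller
        "do_not_retry": m["dnr"] == "1",  # dnr:0 in this log, so retries are allowed
        "status": PATH_RELATED_SC.get(sc, m["status"]) if sct == 0x3 else m["status"],
    }

if __name__ == "__main__":
    sample = ("[2024-07-15 09:33:18.437550] nvme_qpair.c: 474:spdk_nvme_print_completion: "
              "*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 "
              "sqhd:007c p:0 m:0 dnr:0")
    print(decode(sample))
```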
lba:30624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.598981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.598998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.599021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.599038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.599061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.599078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.599100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.369 [2024-07-15 09:33:33.599126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.599149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.599166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.599189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.599206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.599233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.599250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.599273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.599290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.599313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.599345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.599368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.599384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:25.369 [2024-07-15 09:33:33.599406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.369 [2024-07-15 09:33:33.599422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:25:25.370 [... trimmed: a long run of further nvme_qpair.c command/completion NOTICE pairs repeating the same pattern: queued READ and WRITE commands on qid:1 (lba range roughly 29648-31568, len:8), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing (and wrapping) from 0x0035 through 0x0065, all between 09:33:33.599 and 09:33:33.616 ...]
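Every completion in the run above carries the same status, printed as "(03/02)". As a minimal sketch in plain C (using only the standard NVMe completion-status bit layout, not SPDK's own headers or print helpers), that pair decodes as status code type 0x3 (path related) with status code 0x02 (Asymmetric Access Inaccessible), which is what an NVMe-oF target returns for I/O sent to an ANA group while it is in the inaccessible state:

    /* Sketch only: decode the status fields shown in the notices above.
     * Field names mirror the log output (sct/sc, p, m, dnr); the layout
     * follows the NVMe completion queue entry, DW3 bits 16..31. */
    #include <stdint.h>
    #include <stdio.h>

    struct nvme_status {
        uint16_t p   : 1; /* phase tag */
        uint16_t sc  : 8; /* status code */
        uint16_t sct : 3; /* status code type */
        uint16_t crd : 2; /* command retry delay */
        uint16_t m   : 1; /* more information available */
        uint16_t dnr : 1; /* do not retry */
    };

    static const char *sct_name(uint8_t sct)
    {
        switch (sct) {
        case 0x0: return "GENERIC";
        case 0x1: return "COMMAND SPECIFIC";
        case 0x2: return "MEDIA AND DATA INTEGRITY";
        case 0x3: return "PATH RELATED";
        default:  return "VENDOR/RESERVED";
        }
    }

    int main(void)
    {
        /* "(03/02)" in the log: SCT 0x3 / SC 0x02, the path-related status
         * "Asymmetric Access Inaccessible". p:0 m:0 dnr:0 as printed. */
        struct nvme_status st = { .p = 0, .sc = 0x02, .sct = 0x3,
                                  .crd = 0, .m = 0, .dnr = 0 };

        printf("sct=0x%x (%s) sc=0x%02x p:%u m:%u dnr:%u\n",
               st.sct, sct_name(st.sct), st.sc, st.p, st.m, st.dnr);
        printf("retryable: %s\n", st.dnr ? "no" : "yes");
        return 0;
    }

With dnr:0 the status is retryable, which is consistent with the same LBAs reappearing throughout the trace as the host resubmits the failed commands once the path state changes.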
00:25:25.372 [2024-07-15 09:33:33.615950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.615967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.615990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.616010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.616033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.616050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.616072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.616089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.616117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.616133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.616156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.616173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.616196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.616212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.616252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.616268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.616305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.616321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.616342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.616359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.616397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.616413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.617870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.617894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.617922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.617940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.617962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.617979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.618026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.618066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.618117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.618157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.618212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.618266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.618304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.618342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.618379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.618417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.618455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.618492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.618533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.618571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.618608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:25.372 [2024-07-15 09:33:33.618645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.618682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.618719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.618756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.618819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.618868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.618908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.618947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.618970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.618987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.619010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.619030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.619053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.372 [2024-07-15 09:33:33.619070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.620751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.620776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.620812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.620832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:25.372 [2024-07-15 09:33:33.620858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.372 [2024-07-15 09:33:33.620875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.620898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.620914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.620937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.620953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.620976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.620993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.621032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.621071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.621111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.621166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.621210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.621265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.621320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.621360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.621399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.621438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.621476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.621515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.621554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:25:25.373 [2024-07-15 09:33:33.621576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.621593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.621645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.621683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.621719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.621745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.621761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.623111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.623170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.623212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.623252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.623293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.623332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.623372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.623426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.623466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.623505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.623543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.623602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.623639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.623677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.623714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.623752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.623815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.623864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.623903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.623943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.623966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.623982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.624005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.624022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.624044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.624061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.624097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.624119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.624142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:25.373 [2024-07-15 09:33:33.624159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.624181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.624197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.624219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.624236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.624257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.624274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.624296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.624312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.624334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.624351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.624373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.624389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.624412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.624428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.624450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.624466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.625908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.625933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.625961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.625979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.626003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.626024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.626048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.626066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.626088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.626105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.626127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.626144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.626167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.626184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.626207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.626224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.627262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.627287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.627315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.627334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.627356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.627373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.627397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.627414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.627436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.627453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.627476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.627493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.627516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.627532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.627574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.627591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.627614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.373 [2024-07-15 09:33:33.627631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.627652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.373 [2024-07-15 09:33:33.627668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:25.373 [2024-07-15 09:33:33.627689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.627705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.627743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.627760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.627798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.627824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 
dnr:0 00:25:25.374 [2024-07-15 09:33:33.627859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.627876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.627899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.627916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.627938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.627955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.627978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.627995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.628034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.628073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.628143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.628181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.628234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.628273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.628310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.628349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.628386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.628423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.628460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.628497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.628534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.628571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.628612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.628650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.628687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.628724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.628746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.628762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.630946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.374 [2024-07-15 09:33:33.630971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.630998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.631016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.631040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.631057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.631079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.631103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.631126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.631145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.631168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.374 [2024-07-15 09:33:33.631185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:25.374 [2024-07-15 09:33:33.631207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:25.374 [2024-07-15 09:33:33.631224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:25.374 [2024-07-15 09:33:33.631246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:25.374 [2024-07-15 09:33:33.631268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:25.374 [the remaining ~93 command/completion pairs in this burst are elided: nvme_io_qpair_print_command (nvme_qpair.c:243) prints each outstanding READ or WRITE on sqid:1 (lba 30736-32512, len:8) and spdk_nvme_print_completion (nvme_qpair.c:474) reports each one finished with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 007b through 0057 wrapping after 007f, timestamps 09:33:33.631291 through 09:33:33.639071]
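
Every failed I/O above leaves a matched pair of notices, one from nvme_io_qpair_print_command and one from spdk_nvme_print_completion, so the failure mix can be tallied straight from the text. A minimal sketch, assuming the console output has been saved to autotest.log (the file name is illustrative):

    # Count ANA-inaccessible completions, then break the affected commands down by opcode.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' autotest.log
    grep 'nvme_io_qpair_print_command' autotest.log \
        | awk '{ for (i = 1; i <= NF; i++) if ($i == "READ" || $i == "WRITE") n[$i]++ }
               END { for (op in n) print op, n[op] }'
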
00:25:25.375 Received shutdown signal, test time was about 32.520155 seconds
00:25:25.375
00:25:25.375 Latency(us)
00:25:25.375 Device Information                                                       : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:25:25.375 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:25.375 	 Verification LBA range: start 0x0 length 0x4000
00:25:25.375 	 Nvme0n1             :      32.52    8354.22      32.63      0.00     0.00   15294.41     241.21 3529429.14
00:25:25.375 ===================================================================================================================
00:25:25.375 Total                  :               8354.22      32.63      0.00     0.00   15294.41     241.21 3529429.14
00:25:25.375 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
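
In the summary table above, the MiB/s column follows from the IOPS column and the 4096-byte IO size (IOPS x 4096 / 2^20). A quick consistency check of the Nvme0n1 row, as a standalone snippet:

    # 8354.22 IOPS at 4 KiB per IO should give the reported 32.63 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 8354.22 * 4096 / (1024 * 1024) }'
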
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:25.635 rmmod nvme_tcp
00:25:25.635 rmmod nvme_fabrics
00:25:25.635 rmmod nvme_keyring
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 909821 ']'
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 909821
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 909821 ']'
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 909821
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 909821
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 909821'
00:25:25.635 killing process with pid 909821
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 909821
00:25:25.635 09:33:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 909821
00:25:25.894 09:33:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:25.894 09:33:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:25.894 09:33:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:25.894 09:33:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:25.894 09:33:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:25.894 09:33:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:25.894 09:33:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:25.894 09:33:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
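
The teardown traced above always runs the same sequence: drop the test subsystem over RPC, unload the kernel initiator modules, then stop the target by pid. A condensed sketch of that sequence, assuming rpc.py is on PATH and the target pid is in $nvmfpid (an outline of the idea, not the suite's nvmftestfini):

    #!/usr/bin/env bash
    # Tear down one nvmf-tcp test: remove the subsystem, unload modules, stop the target.
    set -e
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    set +e                        # module removal may fail while devices settle, so retry
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
    if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid"           # works because the target was launched from this shell
    fi
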
00:25:28.428 09:33:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:28.428
00:25:28.428 real	0m41.404s
00:25:28.428 user	2m3.070s
00:25:28.428 sys	0m11.194s
00:25:28.428 09:33:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:28.428 09:33:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:28.428 ************************************
00:25:28.428 END TEST nvmf_host_multipath_status
00:25:28.428 ************************************
00:25:28.428 09:33:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:25:28.428 09:33:39 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:28.428 09:33:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:28.428 09:33:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:28.428 09:33:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:28.428 ************************************
00:25:28.428 START TEST nvmf_discovery_remove_ifc
00:25:28.428 ************************************
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:28.428 * Looking for test storage...
00:25:28.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVMF_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:28.428 [paths/export.sh@2-@4: three successive PATH= assignments each prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the standard system paths, so the same prefixes recur repeatedly; the full values are elided here]
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:25:28.428 [paths/export.sh@6: the final PATH value is echoed; elided]
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:28.428 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:25:28.429 09:33:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=()
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=()
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=()
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=()
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=()
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=()
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=()
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:30.336 [nvmf/common.sh@306-@318: the same pattern appends the known Mellanox device IDs (0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013) to the mlx array]
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:25:30.336 Found 0000:09:00.0 (0x8086 - 0x159b)
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:25:30.336 Found 0000:09:00.1 (0x8086 - 0x159b)
00:25:30.336 [nvmf/common.sh@342-@352: the same ice/unknown, unbound and device-id checks repeat for the second port]
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:25:30.336 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:25:30.337 Found net devices under 0000:09:00.0: cvl_0_0
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:30.337 [nvmf/common.sh@383-@399: the same sysfs net-device lookup repeats for the second port]
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:25:30.337 Found net devices under 0000:09:00.1: cvl_0_1
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
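
gather_supported_nvmf_pci_devs, traced above, boils down to matching known NIC vendor/device IDs against sysfs and collecting the net interface behind each match. A standalone sketch of that scan for the E810 ID seen here (0x8086:0x159b), written fresh for illustration rather than copied from nvmf/common.sh:

    #!/usr/bin/env bash
    # Find Intel E810 (8086:159b) PCI functions and print the kernel net device
    # that sysfs reports under each one.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue          # skip functions with no bound net device
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done
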
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:30.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:30.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms
00:25:30.337
00:25:30.337 --- 10.0.0.2 ping statistics ---
00:25:30.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:30.337 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:30.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:30.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms
00:25:30.337
00:25:30.337 --- 10.0.0.1 ping statistics ---
00:25:30.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:30.337 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=916929
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 916929
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 916929 ']'
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:30.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:30.337 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
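
The namespace plumbing above gives the target its own network stack (cvl_0_0 inside cvl_0_0_ns_spdk) while the initiator keeps cvl_0_1, so NVMe/TCP traffic crosses a real link and both directions are verified with a ping. A minimal sketch of the same topology on a machine without the physical port pair, using a veth pair and hypothetical names (nvmf_ns, veth_tgt, veth_ini):

    #!/usr/bin/env bash
    # Two-stack test topology in the spirit of nvmf_tcp_init, built on a veth pair.
    set -e
    ip netns add nvmf_ns
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns nvmf_ns                   # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev veth_ini                 # initiator IP in the root namespace
    ip netns exec nvmf_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec nvmf_ns ip link set veth_tgt up
    ip netns exec nvmf_ns ip link set lo up
    iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # initiator -> target reachability
    ip netns exec nvmf_ns ping -c 1 10.0.0.1             # target -> initiator
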
00:25:30.337 [2024-07-15 09:33:41.401976] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:25:30.337 [2024-07-15 09:33:41.402044] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:30.337 EAL: No free 2048 kB hugepages reported on node 1
00:25:30.337 [2024-07-15 09:33:41.463926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:30.596 [2024-07-15 09:33:41.575761] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:30.596 [2024-07-15 09:33:41.575839] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:30.596 [2024-07-15 09:33:41.575854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:30.596 [2024-07-15 09:33:41.575866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:30.596 [2024-07-15 09:33:41.575875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:30.596 [2024-07-15 09:33:41.575918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:30.596 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:30.596 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0
00:25:30.596 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:30.596 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:30.596 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:30.596 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:30.596 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:25:30.596 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:30.596 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:30.596 [2024-07-15 09:33:41.724167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:30.596 [2024-07-15 09:33:41.732347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:25:30.596 null0
00:25:30.597 [2024-07-15 09:33:41.764284] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:30.597 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:30.597 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=916949
00:25:30.597 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:25:30.597 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 916949 /tmp/host.sock
00:25:30.597 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 916949 ']'
00:25:30.597 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock
00:25:30.597 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:30.597 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:25:30.597 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:25:30.597 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:30.597 09:33:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:30.854 [2024-07-15 09:33:41.827652] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:25:30.854 [2024-07-15 09:33:41.827731] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid916949 ]
00:25:30.854 EAL: No free 2048 kB hugepages reported on node 1
00:25:30.854 [2024-07-15 09:33:41.884277] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:30.854 [2024-07-15 09:33:41.988723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:31.113 09:33:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:31.113 09:33:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0
00:25:31.113 09:33:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:31.113 09:33:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:25:31.113 09:33:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:31.113 09:33:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:31.113 09:33:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:31.113 09:33:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:25:31.113 09:33:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:31.113 09:33:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:32.051 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:32.051 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
00:25:32.051 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:32.051 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:32.310 [2024-07-15 09:33:43.473411] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:32.310 [2024-07-15 09:33:43.473471] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:32.310 [2024-07-15 09:33:43.473511] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:32.310 [2024-07-15 09:33:43.473534] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:32.311 [2024-07-15 09:33:43.473566] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:32.311 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.311 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:32.311 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:32.311 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.311 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.311 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:32.311 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.311 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:32.311 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:32.311 [2024-07-15 09:33:43.480116] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18b4870 was disconnected and freed. delete nvme_qpair. 
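The bdev_get_bdevs | jq | sort | xargs pipeline traced above, and re-run once a second through the rest of this test, is a single polling helper. Condensed from the traced commands (rpc_cmd is the autotest wrapper around scripts/rpc.py):

    get_bdev_list() {
        # Space-separated names of all bdevs the host app currently sees
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the list matches: nvme0n1 after attach, '' after removal,
        # nvme1n1 once the interface comes back
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }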
00:25:32.311 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.569 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:32.569 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:32.569 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:32.569 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:32.569 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:32.569 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.570 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:32.570 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.570 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:32.570 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.570 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:32.570 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.570 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:32.570 09:33:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:33.509 09:33:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:33.509 09:33:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.509 09:33:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:33.509 09:33:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.509 09:33:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.509 09:33:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:33.509 09:33:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:33.509 09:33:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.509 09:33:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:33.509 09:33:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:34.890 09:33:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:34.890 09:33:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.890 09:33:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:34.890 09:33:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.890 09:33:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:34.890 09:33:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:25:34.890 09:33:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:34.890 09:33:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.890 09:33:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:34.890 09:33:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:35.829 09:33:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:35.829 09:33:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.829 09:33:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:35.829 09:33:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.829 09:33:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:35.829 09:33:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:35.829 09:33:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:35.829 09:33:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.829 09:33:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:35.829 09:33:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:36.767 09:33:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:36.767 09:33:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.767 09:33:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:36.767 09:33:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.767 09:33:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:36.767 09:33:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:36.767 09:33:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:36.767 09:33:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.767 09:33:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:36.767 09:33:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:37.707 09:33:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:37.707 09:33:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.707 09:33:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:37.707 09:33:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.707 09:33:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:37.707 09:33:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.707 09:33:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:37.707 09:33:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:25:37.707 09:33:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:37.707 09:33:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:37.966 [2024-07-15 09:33:48.914545] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:37.966 [2024-07-15 09:33:48.914618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.966 [2024-07-15 09:33:48.914639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.966 [2024-07-15 09:33:48.914657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.966 [2024-07-15 09:33:48.914670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.966 [2024-07-15 09:33:48.914684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.966 [2024-07-15 09:33:48.914697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.966 [2024-07-15 09:33:48.914710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.966 [2024-07-15 09:33:48.914722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.966 [2024-07-15 09:33:48.914735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.966 [2024-07-15 09:33:48.914748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.966 [2024-07-15 09:33:48.914761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187b300 is same with the state(5) to be set 00:25:37.966 [2024-07-15 09:33:48.924561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187b300 (9): Bad file descriptor 00:25:37.966 [2024-07-15 09:33:48.934608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:38.903 09:33:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:38.903 09:33:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.903 09:33:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:38.903 09:33:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.903 09:33:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:38.903 09:33:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:38.903 09:33:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:38.903 [2024-07-15 09:33:49.948839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:38.903 [2024-07-15 
09:33:49.948896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187b300 with addr=10.0.0.2, port=4420 00:25:38.903 [2024-07-15 09:33:49.948917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187b300 is same with the state(5) to be set 00:25:38.903 [2024-07-15 09:33:49.948946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187b300 (9): Bad file descriptor 00:25:38.903 [2024-07-15 09:33:49.949363] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:38.903 [2024-07-15 09:33:49.949392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:38.903 [2024-07-15 09:33:49.949407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:38.903 [2024-07-15 09:33:49.949422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:38.903 [2024-07-15 09:33:49.949443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:38.903 [2024-07-15 09:33:49.949467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:38.903 09:33:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.903 09:33:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:38.903 09:33:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:39.837 [2024-07-15 09:33:50.951963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:39.837 [2024-07-15 09:33:50.952005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:39.837 [2024-07-15 09:33:50.952019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:39.837 [2024-07-15 09:33:50.952033] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:39.837 [2024-07-15 09:33:50.952062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
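errno 110 in the connect() failure above is ETIMEDOUT (the "Connection timed out" seen in the earlier read error): with the namespace interface gone, every reconnect attempt times out and the reset path fails until ctrlr-loss-timeout-sec expires. The test only watches the bdev list, but the reconnect state could also be inspected directly over the host's RPC socket; a sketch, assuming SPDK's rpc.py:

    # Dumps the nvme0 controller's current state (resetting/failed/etc.)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0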
00:25:39.837 [2024-07-15 09:33:50.952106] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:39.837 [2024-07-15 09:33:50.952163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.837 [2024-07-15 09:33:50.952183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.837 [2024-07-15 09:33:50.952200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.837 [2024-07-15 09:33:50.952213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.837 [2024-07-15 09:33:50.952226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.837 [2024-07-15 09:33:50.952239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.837 [2024-07-15 09:33:50.952252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.837 [2024-07-15 09:33:50.952264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.837 [2024-07-15 09:33:50.952278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.837 [2024-07-15 09:33:50.952290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.837 [2024-07-15 09:33:50.952303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:39.837 [2024-07-15 09:33:50.952472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187a780 (9): Bad file descriptor 00:25:39.837 [2024-07-15 09:33:50.953486] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:39.837 [2024-07-15 09:33:50.953506] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:39.837 09:33:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:39.837 09:33:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.837 09:33:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:39.837 09:33:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.837 09:33:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:39.837 09:33:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:39.837 09:33:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:39.837 09:33:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.837 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:39.837 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.837 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.096 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:40.096 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:40.096 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.096 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:40.096 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.096 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:40.096 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.096 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:40.097 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.097 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:40.097 09:33:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:41.035 09:33:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:41.035 09:33:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.035 09:33:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:41.035 09:33:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.035 09:33:52 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:25:41.035 09:33:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:41.035 09:33:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:41.035 09:33:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.035 09:33:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:41.035 09:33:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:41.970 [2024-07-15 09:33:53.007970] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:41.970 [2024-07-15 09:33:53.008004] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:41.970 [2024-07-15 09:33:53.008027] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:41.970 [2024-07-15 09:33:53.094318] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:41.970 09:33:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:41.970 09:33:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.970 09:33:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:41.970 09:33:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.970 09:33:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:41.970 09:33:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.970 09:33:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:41.970 09:33:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.228 09:33:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:42.228 09:33:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:42.228 [2024-07-15 09:33:53.279421] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:42.228 [2024-07-15 09:33:53.279466] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:42.228 [2024-07-15 09:33:53.279499] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:42.228 [2024-07-15 09:33:53.279522] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:42.228 [2024-07-15 09:33:53.279536] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:42.228 [2024-07-15 09:33:53.326462] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1882110 was disconnected and freed. delete nvme_qpair. 
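Stepping back, the whole remove/restore cycle this test exercises, reconstructed from the commands traced above (same paths and helpers as the script):

    # Pull the target's interface out from under the connected host ...
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''        # nvme0n1 disappears once ctrlr-loss-timeout-sec (2) expires

    # ... then bring it back and let the discovery service re-attach
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1   # the re-attached namespace surfaces as nvme1n1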
00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 916949 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 916949 ']' 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 916949 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 916949 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 916949' 00:25:43.159 killing process with pid 916949 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 916949 00:25:43.159 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 916949 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.418 rmmod nvme_tcp 00:25:43.418 rmmod nvme_fabrics 00:25:43.418 rmmod nvme_keyring 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:43.418 
09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 916929 ']' 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 916929 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 916929 ']' 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 916929 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 916929 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 916929' 00:25:43.418 killing process with pid 916929 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 916929 00:25:43.418 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 916929 00:25:43.676 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:43.676 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:43.676 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:43.676 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:43.676 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:43.676 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.676 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.676 09:33:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.255 09:33:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:46.255 00:25:46.255 real 0m17.722s 00:25:46.255 user 0m25.764s 00:25:46.255 sys 0m3.031s 00:25:46.255 09:33:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:46.255 09:33:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:46.255 ************************************ 00:25:46.255 END TEST nvmf_discovery_remove_ifc 00:25:46.255 ************************************ 00:25:46.255 09:33:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:46.255 09:33:56 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:46.255 09:33:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:46.255 09:33:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.255 09:33:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.255 ************************************ 00:25:46.255 START TEST nvmf_identify_kernel_target 00:25:46.255 ************************************ 00:25:46.255 09:33:56 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:46.255 * Looking for test storage... 00:25:46.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:46.255 09:33:57 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:46.255 09:33:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.159 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:48.160 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:48.160 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:48.160 Found net devices under 0000:09:00.0: cvl_0_0 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:48.160 Found net devices under 0000:09:00.1: cvl_0_1 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:48.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:25:48.160 00:25:48.160 --- 10.0.0.2 ping statistics --- 00:25:48.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.160 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:48.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:25:48.160 00:25:48.160 --- 10.0.0.1 ping statistics --- 00:25:48.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.160 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:48.160 09:33:59 
00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme
00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet
00:25:48.160 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:25:48.161 09:33:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:25:49.096 Waiting for block devices as requested
00:25:49.096 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:25:49.355 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:25:49.355 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:25:49.614 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:25:49.614 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:25:49.614 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:25:49.614 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:25:49.873 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:25:49.873 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme
00:25:50.131 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:25:50.131 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:25:50.131 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:25:50.131 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:25:50.131 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:25:50.388 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:25:50.389 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:25:50.389 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:25:50.646 No valid GPT data, bailing
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt=
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420
00:25:50.646 
00:25:50.646 Discovery Log Number of Records 2, Generation counter 2
00:25:50.646 =====Discovery Log Entry 0======
00:25:50.646 trtype: tcp
00:25:50.646 adrfam: ipv4
00:25:50.646 subtype: current discovery subsystem
00:25:50.646 treq: not specified, sq flow control disable supported
00:25:50.646 portid: 1
00:25:50.646 trsvcid: 4420
00:25:50.646 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:50.646 traddr: 10.0.0.1
00:25:50.646 eflags: none
00:25:50.646 sectype: none
00:25:50.646 =====Discovery Log Entry 1======
00:25:50.646 trtype: tcp
00:25:50.646 adrfam: ipv4
00:25:50.646 subtype: nvme subsystem
00:25:50.646 treq: not specified, sq flow control disable supported
00:25:50.646 portid: 1
00:25:50.646 trsvcid: 4420
00:25:50.646 subnqn: nqn.2016-06.io.spdk:testnqn
00:25:50.646 traddr: 10.0.0.1
00:25:50.646 eflags: none
00:25:50.646 sectype: none
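Bash xtrace does not print redirections, so the mkdir/echo/ln sequence above appears without its configfs destinations. Reconstructed against the standard kernel nvmet attribute layout (the redirect targets are an assumption; attr_model matches the Model Number that the identify output reports further down):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"    # model string
  echo 1 > "$subsys/attr_allow_any_host"                          # no host allow-list
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"          # back namespace 1 with the local disk
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                             # expose the subsystem on the port

The nvme discover output above confirms the result: a discovery entry plus the nqn.2016-06.io.spdk:testnqn NVM subsystem listening on 10.0.0.1:4420.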
00:25:50.646 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1
00:25:50.646 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:25:50.646 EAL: No free 2048 kB hugepages reported on node 1
00:25:50.906 =====================================================
00:25:50.906 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:25:50.906 =====================================================
00:25:50.906 Controller Capabilities/Features
00:25:50.906 ================================
00:25:50.906 Vendor ID: 0000
00:25:50.906 Subsystem Vendor ID: 0000
00:25:50.906 Serial Number: 3aab81c4acdce8fab750
00:25:50.906 Model Number: Linux
00:25:50.906 Firmware Version: 6.7.0-68
00:25:50.906 Recommended Arb Burst: 0
00:25:50.906 IEEE OUI Identifier: 00 00 00
00:25:50.906 Multi-path I/O
00:25:50.906 May have multiple subsystem ports: No
00:25:50.906 May have multiple controllers: No
00:25:50.906 Associated with SR-IOV VF: No
00:25:50.906 Max Data Transfer Size: Unlimited
00:25:50.906 Max Number of Namespaces: 0
00:25:50.906 Max Number of I/O Queues: 1024
00:25:50.906 NVMe Specification Version (VS): 1.3
00:25:50.906 NVMe Specification Version (Identify): 1.3
00:25:50.906 Maximum Queue Entries: 1024
00:25:50.906 Contiguous Queues Required: No
00:25:50.907 Arbitration Mechanisms Supported
00:25:50.907 Weighted Round Robin: Not Supported
00:25:50.907 Vendor Specific: Not Supported
00:25:50.907 Reset Timeout: 7500 ms
00:25:50.907 Doorbell Stride: 4 bytes
00:25:50.907 NVM Subsystem Reset: Not Supported
00:25:50.907 Command Sets Supported
00:25:50.907 NVM Command Set: Supported
00:25:50.907 Boot Partition: Not Supported
00:25:50.907 Memory Page Size Minimum: 4096 bytes
00:25:50.907 Memory Page Size Maximum: 4096 bytes
00:25:50.907 Persistent Memory Region: Not Supported
00:25:50.907 Optional Asynchronous Events Supported
00:25:50.907 Namespace Attribute Notices: Not Supported
00:25:50.907 Firmware Activation Notices: Not Supported
00:25:50.907 ANA Change Notices: Not Supported
00:25:50.907 PLE Aggregate Log Change Notices: Not Supported
00:25:50.907 LBA Status Info Alert Notices: Not Supported
00:25:50.907 EGE Aggregate Log Change Notices: Not Supported
00:25:50.907 Normal NVM Subsystem Shutdown event: Not Supported
00:25:50.907 Zone Descriptor Change Notices: Not Supported
00:25:50.907 Discovery Log Change Notices: Supported
00:25:50.907 Controller Attributes
00:25:50.907 128-bit Host Identifier: Not Supported
00:25:50.907 Non-Operational Permissive Mode: Not Supported
00:25:50.907 NVM Sets: Not Supported
00:25:50.907 Read Recovery Levels: Not Supported
00:25:50.907 Endurance Groups: Not Supported
00:25:50.907 Predictable Latency Mode: Not Supported
00:25:50.907 Traffic Based Keep ALive: Not Supported
00:25:50.907 Namespace Granularity: Not Supported
00:25:50.907 SQ Associations: Not Supported
00:25:50.907 UUID List: Not Supported
00:25:50.907 Multi-Domain Subsystem: Not Supported
00:25:50.907 Fixed Capacity Management: Not Supported
00:25:50.907 Variable Capacity Management: Not Supported
00:25:50.907 Delete Endurance Group: Not Supported
00:25:50.907 Delete NVM Set: Not Supported
00:25:50.907 Extended LBA Formats Supported: Not Supported
00:25:50.907 Flexible Data Placement Supported: Not Supported
00:25:50.907 
00:25:50.907 Controller Memory Buffer Support
00:25:50.907 ================================
00:25:50.907 Supported: No
00:25:50.907 
00:25:50.907 Persistent Memory Region Support
00:25:50.907 ================================
00:25:50.907 Supported: No
00:25:50.907 
00:25:50.907 Admin Command Set Attributes
00:25:50.907 ============================
00:25:50.907 Security Send/Receive: Not Supported
00:25:50.907 Format NVM: Not Supported
00:25:50.907 Firmware Activate/Download: Not Supported
00:25:50.907 Namespace Management: Not Supported
00:25:50.907 Device Self-Test: Not Supported
00:25:50.907 Directives: Not Supported
00:25:50.907 NVMe-MI: Not Supported
00:25:50.907 Virtualization Management: Not Supported
00:25:50.907 Doorbell Buffer Config: Not Supported
00:25:50.907 Get LBA Status Capability: Not Supported
00:25:50.907 Command & Feature Lockdown Capability: Not Supported
00:25:50.907 Abort Command Limit: 1
00:25:50.907 Async Event Request Limit: 1
00:25:50.907 Number of Firmware Slots: N/A
00:25:50.907 Firmware Slot 1 Read-Only: N/A
00:25:50.907 Firmware Activation Without Reset: N/A
00:25:50.907 Multiple Update Detection Support: N/A
00:25:50.907 Firmware Update Granularity: No Information Provided
00:25:50.907 Per-Namespace SMART Log: No
00:25:50.907 Asymmetric Namespace Access Log Page: Not Supported
00:25:50.907 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:25:50.907 Command Effects Log Page: Not Supported
00:25:50.907 Get Log Page Extended Data: Supported
00:25:50.907 Telemetry Log Pages: Not Supported
00:25:50.907 Persistent Event Log Pages: Not Supported
00:25:50.907 Supported Log Pages Log Page: May Support
00:25:50.907 Commands Supported & Effects Log Page: Not Supported
00:25:50.907 Feature Identifiers & Effects Log Page:May Support
00:25:50.907 NVMe-MI Commands & Effects Log Page: May Support
00:25:50.907 Data Area 4 for Telemetry Log: Not Supported
00:25:50.907 Error Log Page Entries Supported: 1
00:25:50.907 Keep Alive: Not Supported
00:25:50.907 
00:25:50.907 NVM Command Set Attributes
00:25:50.907 ==========================
00:25:50.907 Submission Queue Entry Size
00:25:50.907 Max: 1
00:25:50.907 Min: 1
00:25:50.907 Completion Queue Entry Size
00:25:50.907 Max: 1
00:25:50.907 Min: 1
00:25:50.907 Number of Namespaces: 0
00:25:50.907 Compare Command: Not Supported
00:25:50.907 Write Uncorrectable Command: Not Supported
00:25:50.907 Dataset Management Command: Not Supported
00:25:50.907 Write Zeroes Command: Not Supported
00:25:50.907 Set Features Save Field: Not Supported
00:25:50.907 Reservations: Not Supported
00:25:50.907 Timestamp: Not Supported
00:25:50.907 Copy: Not Supported
00:25:50.907 Volatile Write Cache: Not Present
00:25:50.907 Atomic Write Unit (Normal): 1
00:25:50.907 Atomic Write Unit (PFail): 1
00:25:50.907 Atomic Compare & Write Unit: 1
00:25:50.907 Fused Compare & Write: Not Supported
00:25:50.907 Scatter-Gather List
00:25:50.907 SGL Command Set: Supported
00:25:50.907 SGL Keyed: Not Supported
00:25:50.907 SGL Bit Bucket Descriptor: Not Supported
00:25:50.907 SGL Metadata Pointer: Not Supported
00:25:50.907 Oversized SGL: Not Supported
00:25:50.907 SGL Metadata Address: Not Supported
00:25:50.907 SGL Offset: Supported
00:25:50.907 Transport SGL Data Block: Not Supported
00:25:50.907 Replay Protected Memory Block: Not Supported
00:25:50.907 
00:25:50.907 Firmware Slot Information
00:25:50.907 =========================
00:25:50.907 Active slot: 0
00:25:50.907 
00:25:50.907 
00:25:50.907 Error Log
00:25:50.907 =========
00:25:50.907 
00:25:50.907 Active Namespaces
00:25:50.907 =================
00:25:50.907 Discovery Log Page
00:25:50.907 ==================
00:25:50.907 Generation Counter: 2
00:25:50.907 Number of Records: 2
00:25:50.907 Record Format: 0
00:25:50.907 
00:25:50.907 Discovery Log Entry 0
00:25:50.907 ----------------------
00:25:50.907 Transport Type: 3 (TCP)
00:25:50.907 Address Family: 1 (IPv4)
00:25:50.907 Subsystem Type: 3 (Current Discovery Subsystem)
00:25:50.907 Entry Flags:
00:25:50.907 Duplicate Returned Information: 0
00:25:50.907 Explicit Persistent Connection Support for Discovery: 0
00:25:50.907 Transport Requirements:
00:25:50.907 Secure Channel: Not Specified
00:25:50.907 Port ID: 1 (0x0001)
00:25:50.907 Controller ID: 65535 (0xffff)
00:25:50.907 Admin Max SQ Size: 32
00:25:50.907 Transport Service Identifier: 4420
00:25:50.907 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:25:50.907 Transport Address: 10.0.0.1
00:25:50.907 Discovery Log Entry 1
00:25:50.907 ----------------------
00:25:50.907 Transport Type: 3 (TCP)
00:25:50.907 Address Family: 1 (IPv4)
00:25:50.907 Subsystem Type: 2 (NVM Subsystem)
00:25:50.907 Entry Flags:
00:25:50.907 Duplicate Returned Information: 0
00:25:50.907 Explicit Persistent Connection Support for Discovery: 0
00:25:50.907 Transport Requirements:
00:25:50.907 Secure Channel: Not Specified
00:25:50.907 Port ID: 1 (0x0001)
00:25:50.907 Controller ID: 65535 (0xffff)
00:25:50.907 Admin Max SQ Size: 32
00:25:50.907 Transport Service Identifier: 4420
00:25:50.907 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn
00:25:50.907 Transport Address: 10.0.0.1
00:25:50.907 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:25:50.907 EAL: No free 2048 kB hugepages reported on node 1
00:25:50.907 get_feature(0x01) failed
00:25:50.907 get_feature(0x02) failed
00:25:50.907 get_feature(0x04) failed
00:25:50.907 =====================================================
00:25:50.907 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:25:50.907 =====================================================
00:25:50.907 Controller Capabilities/Features
00:25:50.907 ================================
00:25:50.907 Vendor ID: 0000
00:25:50.907 Subsystem Vendor ID: 0000
00:25:50.907 Serial Number: 9518aa5972d8e18f1668
00:25:50.907 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn
00:25:50.907 Firmware Version: 6.7.0-68
00:25:50.907 Recommended Arb Burst: 6
00:25:50.907 IEEE OUI Identifier: 00 00 00
00:25:50.907 Multi-path I/O
00:25:50.907 May have multiple subsystem ports: Yes
00:25:50.907 May have multiple controllers: Yes
00:25:50.907 Associated with SR-IOV VF: No
00:25:50.907 Max Data Transfer Size: Unlimited
00:25:50.907 Max Number of Namespaces: 1024
00:25:50.907 Max Number of I/O Queues: 128
00:25:50.907 NVMe Specification Version (VS): 1.3
00:25:50.907 NVMe Specification Version (Identify): 1.3
00:25:50.907 Maximum Queue Entries: 1024
00:25:50.907 Contiguous Queues Required: No
00:25:50.907 Arbitration Mechanisms Supported
00:25:50.907 Weighted Round Robin: Not Supported
00:25:50.907 Vendor Specific: Not Supported
00:25:50.907 Reset Timeout: 7500 ms
00:25:50.907 Doorbell Stride: 4 bytes
00:25:50.907 NVM Subsystem Reset: Not Supported
00:25:50.907 Command Sets Supported
00:25:50.907 NVM Command Set: Supported
00:25:50.907 Boot Partition: Not Supported
00:25:50.907 Memory Page Size Minimum: 4096 bytes
00:25:50.907 Memory Page Size Maximum: 4096 bytes
00:25:50.907 Persistent Memory Region: Not Supported
00:25:50.907 Optional Asynchronous Events Supported
00:25:50.907 Namespace Attribute Notices: Supported
00:25:50.907 Firmware Activation Notices: Not Supported
00:25:50.907 ANA Change Notices: Supported
00:25:50.907 PLE Aggregate Log Change Notices: Not Supported
00:25:50.907 LBA Status Info Alert Notices: Not Supported
00:25:50.907 EGE Aggregate Log Change Notices: Not Supported
00:25:50.907 Normal NVM Subsystem Shutdown event: Not Supported
00:25:50.907 Zone Descriptor Change Notices: Not Supported
00:25:50.907 Discovery Log Change Notices: Not Supported
00:25:50.907 Controller Attributes
00:25:50.907 128-bit Host Identifier: Supported
00:25:50.907 Non-Operational Permissive Mode: Not Supported
00:25:50.907 NVM Sets: Not Supported
00:25:50.907 Read Recovery Levels: Not Supported
00:25:50.907 Endurance Groups: Not Supported
00:25:50.907 Predictable Latency Mode: Not Supported
00:25:50.907 Traffic Based Keep ALive: Supported
00:25:50.907 Namespace Granularity: Not Supported
00:25:50.907 SQ Associations: Not Supported
00:25:50.907 UUID List: Not Supported
00:25:50.907 Multi-Domain Subsystem: Not Supported
00:25:50.907 Fixed Capacity Management: Not Supported
00:25:50.907 Variable Capacity Management: Not Supported
00:25:50.907 Delete Endurance Group: Not Supported
00:25:50.907 Delete NVM Set: Not Supported
00:25:50.907 Extended LBA Formats Supported: Not Supported
00:25:50.907 Flexible Data Placement Supported: Not Supported
00:25:50.907 
00:25:50.907 Controller Memory Buffer Support
00:25:50.907 ================================
00:25:50.907 Supported: No
00:25:50.907 
00:25:50.907 Persistent Memory Region Support
00:25:50.907 ================================
00:25:50.907 Supported: No
00:25:50.907 
00:25:50.907 Admin Command Set Attributes
00:25:50.907 ============================
00:25:50.907 Security Send/Receive: Not Supported
00:25:50.907 Format NVM: Not Supported
00:25:50.907 Firmware Activate/Download: Not Supported
00:25:50.907 Namespace Management: Not Supported
00:25:50.907 Device Self-Test: Not Supported
00:25:50.907 Directives: Not Supported
00:25:50.907 NVMe-MI: Not Supported
00:25:50.907 Virtualization Management: Not Supported
00:25:50.907 Doorbell Buffer Config: Not Supported
00:25:50.907 Get LBA Status Capability: Not Supported
00:25:50.907 Command & Feature Lockdown Capability: Not Supported
00:25:50.907 Abort Command Limit: 4
00:25:50.907 Async Event Request Limit: 4
00:25:50.907 Number of Firmware Slots: N/A
00:25:50.907 Firmware Slot 1 Read-Only: N/A
00:25:50.907 Firmware Activation Without Reset: N/A
00:25:50.907 Multiple Update Detection Support: N/A
00:25:50.907 Firmware Update Granularity: No Information Provided
00:25:50.907 Per-Namespace SMART Log: Yes
00:25:50.907 Asymmetric Namespace Access Log Page: Supported
00:25:50.907 ANA Transition Time : 10 sec
00:25:50.907 
00:25:50.907 Asymmetric Namespace Access Capabilities
00:25:50.907 ANA Optimized State : Supported
00:25:50.907 ANA Non-Optimized State : Supported
00:25:50.907 ANA Inaccessible State : Supported
00:25:50.907 ANA Persistent Loss State : Supported
00:25:50.907 ANA Change State : Supported
00:25:50.907 ANAGRPID is not changed : No
00:25:50.907 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:25:50.907 
00:25:50.907 ANA Group Identifier Maximum : 128
00:25:50.907 Number of ANA Group Identifiers : 128
00:25:50.907 Max Number of Allowed Namespaces : 1024
00:25:50.907 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:25:50.907 Command Effects Log Page: Supported
00:25:50.907 Get Log Page Extended Data: Supported
00:25:50.907 Telemetry Log Pages: Not Supported
00:25:50.907 Persistent Event Log Pages: Not Supported
00:25:50.907 Supported Log Pages Log Page: May Support
00:25:50.907 Commands Supported & Effects Log Page: Not Supported
00:25:50.907 Feature Identifiers & Effects Log Page:May Support
00:25:50.907 NVMe-MI Commands & Effects Log Page: May Support
00:25:50.907 Data Area 4 for Telemetry Log: Not Supported
00:25:50.907 Error Log Page Entries Supported: 128
00:25:50.907 Keep Alive: Supported
00:25:50.907 Keep Alive Granularity: 1000 ms
00:25:50.907 
00:25:50.907 NVM Command Set Attributes
00:25:50.907 ==========================
00:25:50.907 Submission Queue Entry Size
00:25:50.907 Max: 64
00:25:50.907 Min: 64
00:25:50.907 Completion Queue Entry Size
00:25:50.907 Max: 16
00:25:50.907 Min: 16
00:25:50.907 Number of Namespaces: 1024
00:25:50.907 Compare Command: Not Supported
00:25:50.907 Write Uncorrectable Command: Not Supported
00:25:50.907 Dataset Management Command: Supported
00:25:50.907 Write Zeroes Command: Supported
00:25:50.907 Set Features Save Field: Not Supported
00:25:50.907 Reservations: Not Supported
00:25:50.907 Timestamp: Not Supported
00:25:50.907 Copy: Not Supported
00:25:50.907 Volatile Write Cache: Present
00:25:50.907 Atomic Write Unit (Normal): 1
00:25:50.907 Atomic Write Unit (PFail): 1
00:25:50.907 Atomic Compare & Write Unit: 1
00:25:50.907 Fused Compare & Write: Not Supported
00:25:50.907 Scatter-Gather List
00:25:50.907 SGL Command Set: Supported
00:25:50.907 SGL Keyed: Not Supported
00:25:50.907 SGL Bit Bucket Descriptor: Not Supported
00:25:50.907 SGL Metadata Pointer: Not Supported
00:25:50.907 Oversized SGL: Not Supported
00:25:50.907 SGL Metadata Address: Not Supported
00:25:50.907 SGL Offset: Supported
00:25:50.907 Transport SGL Data Block: Not Supported
00:25:50.907 Replay Protected Memory Block: Not Supported
00:25:50.907 
00:25:50.907 Firmware Slot Information
00:25:50.907 =========================
00:25:50.907 Active slot: 0
00:25:50.907 
00:25:50.907 Asymmetric Namespace Access
00:25:50.907 ===========================
00:25:50.907 Change Count : 0
00:25:50.907 Number of ANA Group Descriptors : 1
00:25:50.907 ANA Group Descriptor : 0
00:25:50.907 ANA Group ID : 1
00:25:50.907 Number of NSID Values : 1
00:25:50.907 Change Count : 0
00:25:50.907 ANA State : 1
00:25:50.907 Namespace Identifier : 1
00:25:50.907 
00:25:50.907 Commands Supported and Effects
00:25:50.907 ==============================
00:25:50.907 Admin Commands
00:25:50.907 --------------
00:25:50.907 Get Log Page (02h): Supported
00:25:50.907 Identify (06h): Supported
00:25:50.907 Abort (08h): Supported
00:25:50.907 Set Features (09h): Supported
00:25:50.907 Get Features (0Ah): Supported
00:25:50.907 Asynchronous Event Request (0Ch): Supported
00:25:50.907 Keep Alive (18h): Supported
00:25:50.907 I/O Commands
00:25:50.907 ------------
00:25:50.907 Flush (00h): Supported
00:25:50.907 Write (01h): Supported LBA-Change
00:25:50.907 Read (02h): Supported
00:25:50.907 Write Zeroes (08h): Supported LBA-Change
00:25:50.907 Dataset Management (09h): Supported
00:25:50.907 
00:25:50.907 Error Log
00:25:50.908 =========
00:25:50.908 Entry: 0
00:25:50.908 Error Count: 0x3
00:25:50.908 Submission Queue Id: 0x0
00:25:50.908 Command Id: 0x5
00:25:50.908 Phase Bit: 0
00:25:50.908 Status Code: 0x2
00:25:50.908 Status Code Type: 0x0
00:25:50.908 Do Not Retry: 1
00:25:50.908 Error Location: 0x28
00:25:50.908 LBA: 0x0
00:25:50.908 Namespace: 0x0
00:25:50.908 Vendor Log Page: 0x0
00:25:50.908 -----------
00:25:50.908 Entry: 1
00:25:50.908 Error Count: 0x2
00:25:50.908 Submission Queue Id: 0x0
00:25:50.908 Command Id: 0x5
00:25:50.908 Phase Bit: 0
00:25:50.908 Status Code: 0x2
00:25:50.908 Status Code Type: 0x0
00:25:50.908 Do Not Retry: 1
00:25:50.908 Error Location: 0x28
00:25:50.908 LBA: 0x0
00:25:50.908 Namespace: 0x0
00:25:50.908 Vendor Log Page: 0x0
00:25:50.908 -----------
00:25:50.908 Entry: 2
00:25:50.908 Error Count: 0x1
00:25:50.908 Submission Queue Id: 0x0
00:25:50.908 Command Id: 0x4
00:25:50.908 Phase Bit: 0
00:25:50.908 Status Code: 0x2
00:25:50.908 Status Code Type: 0x0
00:25:50.908 Do Not Retry: 1
00:25:50.908 Error Location: 0x28
00:25:50.908 LBA: 0x0
00:25:50.908 Namespace: 0x0
00:25:50.908 Vendor Log Page: 0x0
00:25:50.908 
00:25:50.908 Number of Queues
00:25:50.908 ================
00:25:50.908 Number of I/O Submission Queues: 128
00:25:50.908 Number of I/O Completion Queues: 128
00:25:50.908 
00:25:50.908 ZNS Specific Controller Data
00:25:50.908 ============================
00:25:50.908 Zone Append Size Limit: 0
00:25:50.908 
00:25:50.908 
00:25:50.908 Active Namespaces
00:25:50.908 =================
00:25:50.908 get_feature(0x05) failed
00:25:50.908 Namespace ID:1
00:25:50.908 Command Set Identifier: NVM (00h)
00:25:50.908 Deallocate: Supported
00:25:50.908 Deallocated/Unwritten Error: Not Supported
00:25:50.908 Deallocated Read Value: Unknown
00:25:50.908 Deallocate in Write Zeroes: Not Supported
00:25:50.908 Deallocated Guard Field: 0xFFFF
00:25:50.908 Flush: Supported
00:25:50.908 Reservation: Not Supported
00:25:50.908 Namespace Sharing Capabilities: Multiple Controllers
00:25:50.908 Size (in LBAs): 1953525168 (931GiB)
00:25:50.908 Capacity (in LBAs): 1953525168 (931GiB)
00:25:50.908 Utilization (in LBAs): 1953525168 (931GiB)
00:25:50.908 UUID: d5713497-3e9f-4fda-80be-eb0bb30d79a6
00:25:50.908 Thin Provisioning: Not Supported
00:25:50.908 Per-NS Atomic Units: Yes
00:25:50.908 Atomic Boundary Size (Normal): 0
00:25:50.908 Atomic Boundary Size (PFail): 0
00:25:50.908 Atomic Boundary Offset: 0
00:25:50.908 NGUID/EUI64 Never Reused: No
00:25:50.908 ANA group ID: 1
00:25:50.908 Namespace Write Protected: No
00:25:50.908 Number of LBA Formats: 1
00:25:50.908 Current LBA Format: LBA Format #00
00:25:50.908 LBA Format #00: Data Size: 512 Metadata Size: 0
00:25:50.908 
00:25:50.908 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup
09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync
09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e
09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20}
09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:50.908 rmmod nvme_tcp
00:25:50.908 rmmod nvme_fabrics
00:25:50.908 09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e
09:34:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0
09:34:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']'
09:34:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
09:34:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
09:34:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
09:34:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
09:34:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns
09:34:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
09:34:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
09:34:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:53.441 09:34:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
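nvmfcleanup (nvmf/common.sh@120-125 above) tolerates a briefly busy module: it disables errexit and retries modprobe -r up to 20 times; the bare "rmmod nvme_tcp" / "rmmod nvme_fabrics" lines are modprobe's -v output. The idiom, roughly (the pause between attempts is an assumption, the trace does not show one):

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break    # -v prints the underlying rmmod calls
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  set -e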
00:25:53.441 09:34:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:25:53.441 09:34:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:25:53.441 09:34:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0
00:25:53.441 09:34:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:53.441 09:34:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:25:53.441 09:34:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:25:53.441 09:34:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:53.441 09:34:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:25:53.441 09:34:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:25:53.441 09:34:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:25:54.375 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:25:54.376 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:25:54.376 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:25:54.376 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:25:54.376 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:25:54.376 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:25:54.376 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:25:54.376 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:25:54.376 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:25:54.376 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:25:54.376 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:25:54.376 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:25:54.376 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:25:54.376 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:25:54.376 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:25:54.376 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:25:55.312 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:25:55.571 
00:25:55.571 real 0m9.609s
00:25:55.571 user 0m2.046s
00:25:55.571 sys 0m3.463s
00:25:55.571 09:34:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:55.571 09:34:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:25:55.571 ************************************
00:25:55.571 END TEST nvmf_identify_kernel_target
00:25:55.571 ************************************
00:25:55.571 09:34:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
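clean_kernel_target (nvmf/common.sh@684-695 above) unwinds the configfs tree built during setup. As before, the echo redirect is hidden by xtrace; it presumably disables the namespace first, and the tree has to come down child-before-parent:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"    # assumed target of the 'echo 0' above
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet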
00:25:55.571 09:34:06 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:25:55.571 09:34:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:55.571 09:34:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:55.571 09:34:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:55.571 ************************************
00:25:55.571 START TEST nvmf_auth_host
00:25:55.571 ************************************
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:25:55.571 * Looking for test storage...
00:25:55.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=()
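host/auth.sh@13-16 above fix three digests and five FFDHE groups, i.e. a 15-combination DH-HMAC-CHAP matrix. A sketch of the iteration this implies (run_case is a hypothetical stand-in for one authenticated connect attempt):

  digests=("sha256" "sha384" "sha512")
  dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          run_case "$digest" "$dhgroup"    # hypothetical helper: one test case per combination
      done
  done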
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=()
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable
00:25:55.571 09:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=()
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=()
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=()
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=()
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=()
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=()
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=()
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:25:58.101 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:25:58.102 Found 0000:09:00.0 (0x8086 - 0x159b)
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:25:58.102 Found 0000:09:00.1 (0x8086 - 0x159b)
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:25:58.102 Found net devices under 0000:09:00.0: cvl_0_0
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:25:58.102 Found net devices under 0000:09:00.1: cvl_0_1
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:58.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:58.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms
00:25:58.102 
00:25:58.102 --- 10.0.0.2 ping statistics ---
00:25:58.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:58.102 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:58.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:58.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms
00:25:58.102 
00:25:58.102 --- 10.0.0.1 ping statistics ---
00:25:58.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:58.102 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=924146
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 924146
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 924146 ']'
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
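waitforlisten (max_retries=100 above) blocks until the freshly started nvmf_tgt answers RPC on /var/tmp/spdk.sock. Roughly what the loop presumably does, using SPDK's standard rpc_get_methods probe (the exact loop body is an assumption):

  pid=924146
  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
      kill -0 "$pid"    # abort if the target process died during startup
      if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
          break         # RPC server is up; the test can proceed
      fi
      sleep 0.1
  done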
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:58.102 09:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:58.102 09:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:58.102 09:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0
00:25:58.102 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:58.102 09:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:58.102 09:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9b5fb1b9cc0c95d16c111c2c60471c65
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Mgs
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9b5fb1b9cc0c95d16c111c2c60471c65 0
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9b5fb1b9cc0c95d16c111c2c60471c65 0
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9b5fb1b9cc0c95d16c111c2c60471c65
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Mgs
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Mgs
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Mgs
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=efd3cb03a202432658ef152cce2ab74b6689b59bf35cc940d7edadd8c252be0b
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Gwl
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key efd3cb03a202432658ef152cce2ab74b6689b59bf35cc940d7edadd8c252be0b 3
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 efd3cb03a202432658ef152cce2ab74b6689b59bf35cc940d7edadd8c252be0b 3
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=efd3cb03a202432658ef152cce2ab74b6689b59bf35cc940d7edadd8c252be0b
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Gwl
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Gwl
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Gwl
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f0af4efd854d22457d4508070188a60ce44e40b300b4baed
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.mmg
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f0af4efd854d22457d4508070188a60ce44e40b300b4baed 0
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f0af4efd854d22457d4508070188a60ce44e40b300b4baed 0
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f0af4efd854d22457d4508070188a60ce44e40b300b4baed
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.mmg
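gen_dhchap_key <digest> <len> draws len/2 random bytes with xxd and hands the hex to an inline python snippet (xtrace shows only "python -") that wraps it in the DHHC-1 secret representation; the digest map above (null=0, sha256=1, sha384=2, sha512=3) becomes the two-digit HMAC identifier in the key string. A rough standalone equivalent, assuming the spec's base64(key bytes + little-endian CRC-32) framing:

  key=$(xxd -p -c0 -l 16 /dev/urandom)   # 16 random bytes, as in 'gen_dhchap_key null 32'
  # assumed framing: DHHC-1:<hmac id>:<base64(key + crc32)>:
  python3 -c 'import base64,binascii,sys; k=bytes.fromhex(sys.argv[1]); print("DHHC-1:00:%s:" % base64.b64encode(k + binascii.crc32(k).to_bytes(4,"little")).decode())' "$key"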
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.mmg 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.mmg 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fc2e92e583dfac37161ffffc984839d5168c6feec04f4220 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.myq 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fc2e92e583dfac37161ffffc984839d5168c6feec04f4220 2 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fc2e92e583dfac37161ffffc984839d5168c6feec04f4220 2 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fc2e92e583dfac37161ffffc984839d5168c6feec04f4220 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.myq 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.myq 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.myq 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3e966472eeede05dc4d31868001bfe87 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.pTI 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3e966472eeede05dc4d31868001bfe87 1 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3e966472eeede05dc4d31868001bfe87 1 
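
[Note: the gen_dhchap_key calls traced above all follow one pattern: read len/2 random bytes from /dev/urandom as a hex string, stage the formatted secret in a mktemp file, chmod it to 0600, and echo the path back to auth.sh. A minimal sketch of that helper, reconstructed from the xtrace (the real nvmf/common.sh source may differ in detail; format_dhchap_key is its companion formatter):

  gen_dhchap_key() {
    local digest=$1 len=$2
    # digest name -> DHHC-1 hash identifier (null=0, sha256=1, sha384=2, sha512=3)
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len counts hex characters, so len/2 bytes
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
  }

auth.sh collects five such files into keys[0..4], plus matching controller keys in ckeys[0..3], deliberately mixing digests and key lengths so each hash/key-size combination gets exercised; ckeys[4] is left empty later in the trace.]
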
00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3e966472eeede05dc4d31868001bfe87 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:58.361 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.pTI 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.pTI 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.pTI 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a2fc47219f6a78d37de22ca810dcd243 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.QnR 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a2fc47219f6a78d37de22ca810dcd243 1 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a2fc47219f6a78d37de22ca810dcd243 1 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a2fc47219f6a78d37de22ca810dcd243 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.QnR 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.QnR 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.QnR 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=d342ffa601c6311d91278051a69bb72568ccf4661d36dbe1 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oT3 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d342ffa601c6311d91278051a69bb72568ccf4661d36dbe1 2 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d342ffa601c6311d91278051a69bb72568ccf4661d36dbe1 2 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d342ffa601c6311d91278051a69bb72568ccf4661d36dbe1 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oT3 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oT3 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.oT3 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9041dd4f493cf4146b3d3667e7dda261 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.rpq 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9041dd4f493cf4146b3d3667e7dda261 0 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9041dd4f493cf4146b3d3667e7dda261 0 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9041dd4f493cf4146b3d3667e7dda261 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.rpq 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.rpq 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.rpq 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e32e1b8eb03f522a4ef974b1c92bc69012f1061849342e1aeb5b2b39075dbc11 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.rnM 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e32e1b8eb03f522a4ef974b1c92bc69012f1061849342e1aeb5b2b39075dbc11 3 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e32e1b8eb03f522a4ef974b1c92bc69012f1061849342e1aeb5b2b39075dbc11 3 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e32e1b8eb03f522a4ef974b1c92bc69012f1061849342e1aeb5b2b39075dbc11 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.rnM 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.rnM 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.rnM 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 924146 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 924146 ']' 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
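
[Note: each bare "python -" step above is what turns a raw hex string into the DHHC-1 secret representation that reappears later in the trace (DHHC-1:<hash-id>:<base64>:). The base64 payload is the ASCII hex key with a little-endian CRC-32 of itself appended, which is why the secrets decode to the generated hex string plus four extra bytes. A minimal stand-in for that step, assuming this standard encoding (the real format_key feeds an equivalent inline script on stdin):

  format_dhchap_key() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                  # the ASCII hex string itself
crc = struct.pack("<I", zlib.crc32(key))    # 4-byte little-endian CRC-32
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
  }

Decoding DHHC-1:01:M2U5... from the trace above recovers the sha256 key 3e966472... plus its CRC bytes, consistent with this layout.]
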
00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:58.620 09:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Mgs 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Gwl ]] 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gwl 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.mmg 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.myq ]] 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.myq 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.186 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.pTI 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.QnR ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QnR 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
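
[Note: the registration loop running here is just keyring_file_add_key once per generated file: key<i> for the host's own secret and ckey<i> for the controller (bidirectional) secret when one exists. Equivalent standalone calls, assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock:

  # What host/auth.sh@80-82 issues per key pair (paths are this run's mktemp names).
  for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    # ckeys[4] is empty, so keyid 4 registers no controller key
    [[ -n ${ckeys[i]} ]] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
  done
]
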
00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.oT3 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.rpq ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.rpq 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.rnM 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
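
[Note: configure_kernel_target, entered here, builds a kernel nvmet subsystem under configfs; the mkdir/echo/ln -s trace that follows maps onto the standard nvmet attribute layout roughly as below. The bare echos in the xtrace do not show their redirection targets, so the attribute names are inferred, not taken from the log:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
  echo 1             > "$subsys/attr_allow_any_host"      # auth.sh flips this to 0 later
  echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
  echo 1             > "$subsys/namespaces/1/enable"
  echo 10.0.0.1      > "$nvmet/ports/1/addr_traddr"
  echo tcp           > "$nvmet/ports/1/addr_trtype"
  echo 4420          > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4          > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The host/auth.sh@36-38 steps further down then create hosts/nqn.2024-02.io.spdk:host0, link it into the subsystem's allowed_hosts, and set allow_any_host back to 0 so only the authenticated host NQN may connect.]
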
00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:59.187 09:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:00.120 Waiting for block devices as requested 00:26:00.120 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:00.379 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:00.379 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:00.379 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:00.639 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:00.639 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:00.639 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:00.639 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:00.897 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:00.897 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:01.155 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:01.155 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:01.155 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:01.155 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:01.412 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:01.412 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:01.412 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:01.669 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:01.669 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:01.669 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:01.669 09:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:01.669 09:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:01.669 09:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:01.669 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:01.669 09:34:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:01.669 09:34:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:01.929 No valid GPT data, bailing 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:26:01.929 00:26:01.929 Discovery Log Number of Records 2, Generation counter 2 00:26:01.929 =====Discovery Log Entry 0====== 00:26:01.929 trtype: tcp 00:26:01.929 adrfam: ipv4 00:26:01.929 subtype: current discovery subsystem 00:26:01.929 treq: not specified, sq flow control disable supported 00:26:01.929 portid: 1 00:26:01.929 trsvcid: 4420 00:26:01.929 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:01.929 traddr: 10.0.0.1 00:26:01.929 eflags: none 00:26:01.929 sectype: none 00:26:01.929 =====Discovery Log Entry 1====== 00:26:01.929 trtype: tcp 00:26:01.929 adrfam: ipv4 00:26:01.929 subtype: nvme subsystem 00:26:01.929 treq: not specified, sq flow control disable supported 00:26:01.929 portid: 1 00:26:01.929 trsvcid: 4420 00:26:01.929 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:01.929 traddr: 10.0.0.1 00:26:01.929 eflags: none 00:26:01.929 sectype: none 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.929 09:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 
]] 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.929 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.930 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.195 nvme0n1 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.195 
09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.195 
09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.195 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.458 nvme0n1 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.458 09:34:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.458 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.718 nvme0n1 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
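
[Note: each connect_authenticate round here is the same short sequence, after nvmet_auth_set_key has echoed the matching hash name ('hmac(sha256)'), dhgroup, and DHHC-1 secrets into the kernel host entry: restrict the initiator's negotiation set with bdev_nvme_set_options, attach using the keyring names registered earlier, check that exactly one nvme0 controller materialized, and detach. Spelled out for the sha256/ffdhe2048/keyid=1 round just traced (rpc.py invocation assumed to match rpc_cmd):

  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # a successful DH-HMAC-CHAP handshake leaves exactly one controller behind
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The surrounding loops repeat this for every digest, dhgroup, and keyid combination, which is why the same attach/verify/detach shape recurs through the rest of the log.]
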
00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.718 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.976 nvme0n1 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:02.976 09:34:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.976 09:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.976 nvme0n1 00:26:02.976 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.976 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.976 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.976 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.976 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.976 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.234 nvme0n1 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.234 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.493 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.494 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.494 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.494 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:03.494 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.494 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.494 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.494 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.494 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:03.494 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:03.494 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.494 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.754 nvme0n1 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.754 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.014 09:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.014 nvme0n1 00:26:04.014 
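A note for readers following the trace: each iteration of the host/auth.sh@101-104 loop above exercises one (digest, dhgroup, keyid) combination end to end. Condensed into a bash sketch built only from the commands the trace shows (rpc_cmd, nvmet_auth_set_key, get_main_ns_ip and the keys/ckeys arrays are the test's own helpers; the function name here is illustrative, not the verbatim script):

    connect_and_verify() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Target side: install the key material for this combination (@103).
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        # Host side: restrict the negotiable digests/dhgroups (@60) ...
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # ... then connect; ckey expands to nothing when keyid has no ctrl key (@58).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
        # A successful DH-HMAC-CHAP handshake surfaces controller nvme0 (@64).
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0   # @65
    }

Every combination in this sweep completes the handshake: the nvme0n1 namespace appears after each attach, and every status check evaluates to [[ 0 == 0 ]].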
09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.014 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.014 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.014 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.014 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.014 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.014 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.014 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.014 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.014 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:04.274 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.275 nvme0n1 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
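The repeated nvmf/common.sh@741-755 block above is the helper that picks which address to dial. Reconstructed as a sketch (TEST_TRANSPORT is an assumed variable name; the trace only shows its expanded value, tcp, and the resolved address 10.0.0.1):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                 # traced: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                 # traced: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1      # indirect expansion; traced: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                        # traced: echo 10.0.0.1
    }

On an RDMA run the same lookup would resolve NVMF_FIRST_TARGET_IP instead; here the TCP initiator address feeds the -a argument of every bdev_nvme_attach_controller call.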
00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.275 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.535 nvme0n1 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.535 
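The bare echo 'hmac(sha256)' / echo ffdhe3072 / echo DHHC-1:... lines at host/auth.sh@48-51 are writes whose redirections xtrace does not display. They plausibly target the kernel nvmet configfs entry for the host; a hedged reconstruction (the configfs path and attribute names are my assumption from the Linux nvmet in-band auth interface, not shown in the trace):

    nvmet_auth_set_key() {   # assumed shape of host/auth.sh@42-51
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "$host/dhchap_hash"       # @48
        echo "$dhgroup"        > "$host/dhchap_dhgroup"    # @49
        echo "$key"            > "$host/dhchap_key"        # @50
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # @51
    }

The @51 pattern in the trace ([[ -z ... ]] immediately followed by an echo of the same value, or by nothing when the value is '') matches exactly this kind of guarded one-liner.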
09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.535 09:34:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.535 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.792 nvme0n1 00:26:04.792 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.793 09:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:05.358 09:34:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.358 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.617 nvme0n1 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.617 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.875 09:34:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.875 09:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.135 nvme0n1 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.135 09:34:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.135 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.393 nvme0n1 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
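Note how host/auth.sh@58 makes bidirectional authentication optional: ${ckeys[keyid]:+...} expands to the two extra arguments only when a non-empty controller key exists for that keyid. For keyid 4 (whose ckey is empty, per the [[ -z '' ]] branches above) the array stays empty and attach_controller receives --dhchap-key alone, so only the target verifies the host. A minimal standalone illustration of the expansion (names here are purely for demonstration):

    demo() {
        local -a ckeys=([0]="secret0" [4]="")    # keyid 4 has no controller key
        local keyid
        for keyid in 0 4; do
            local extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            echo "keyid=$keyid argc=${#extra[@]}"   # prints 2, then 0
        done
    }
    demo

Expanding the array later as "${extra[@]}" passes zero arguments when it is empty, rather than a single empty string, which is why the keyid-4 attach commands in the trace simply lack the --dhchap-ctrlr-key flag.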
00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.393 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.651 nvme0n1 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.651 09:34:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.651 09:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.909 nvme0n1 00:26:06.909 09:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.909 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.909 09:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.909 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.909 09:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.909 09:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.909 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.909 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.909 09:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.909 09:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.167 09:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.167 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.167 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.167 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:07.167 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.167 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.167 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.167 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.167 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:07.167 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:07.167 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.167 09:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:09.064 09:34:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.064 09:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.065 09:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.065 09:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.065 09:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.065 09:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.065 09:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.065 09:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.065 09:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.065 09:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.065 nvme0n1 00:26:09.065 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.065 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.065 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.065 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.065 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.324 
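All secrets in this run use the DHHC-1 on-wire representation defined for NVMe DH-HMAC-CHAP (the format nvme-cli's gen-dhchap-key emits): DHHC-1:<t>:<base64 payload>:, where <t> records the hash used to transform the generated secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and, as I understand the format, the payload is the secret with a 4-byte CRC-32 appended. A quick way to inspect one, using key0 from the trace (pure observation, not part of the test script):

    key='DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE:'
    payload=${key#DHHC-1:*:}; payload=${payload%:}
    echo -n "$payload" | base64 -d | wc -c   # 36 = 32-byte secret + CRC-32

The longer 02 and 03 keys in the trace decode the same way to 48- and 64-byte secrets plus the checksum.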
09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.324 09:34:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.324 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.892 nvme0n1 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.892 09:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.150 nvme0n1 00:26:10.150 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.150 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.150 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.150 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.150 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.150 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.407 
09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:10.407 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.408 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.665 nvme0n1 00:26:10.665 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.665 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.665 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.665 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.665 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.665 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.924 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.924 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.924 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.925 09:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.183 nvme0n1 00:26:11.183 09:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.183 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.183 09:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.443 09:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.382 nvme0n1 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.382 09:34:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.382 09:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.352 nvme0n1 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.352 09:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.288 nvme0n1 00:26:14.288 09:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.288 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.288 09:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.288 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.288 09:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.288 09:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.288 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.288 
09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.288 09:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.288 09:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.288 09:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.288 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
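The get_main_ns_ip trace running through the records above resolves which address the initiator should dial: an associative array maps each transport to the name of the environment variable that holds its connect address, and the helper dereferences that name with bash indirect expansion. A minimal sketch of that logic, reconstructed from the xtrace output alone rather than from the authoritative nvmf/common.sh source (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, and NVMF_INITIATOR_IP are assumed to be exported by the surrounding test environment):

get_main_ns_ip() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA tests dial the target-side address
	ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP tests dial the initiator-side address

	[[ -z $TEST_TRANSPORT ]] && return 1                    # no transport selected
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # transport not mapped
	ip=${ip_candidates[$TEST_TRANSPORT]}                    # variable *name*, e.g. NVMF_INITIATOR_IP
	[[ -z ${!ip} ]] && return 1                             # indirect expansion: its value
	echo "${!ip}"                                           # in this run: 10.0.0.1
}

That indirection is why the trace shows the literal strings tcp and NVMF_INITIATOR_IP being tested with [[ -z ]] before the concrete 10.0.0.1 ever appears.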
00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.289 09:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.856 nvme0n1 00:26:14.856 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.856 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.856 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.856 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.856 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.856 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.117 
09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.117 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.058 nvme0n1 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.058 09:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.058 nvme0n1 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.058 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
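Every digest/dhgroup/keyid combination in this test runs the same four-step cycle the surrounding trace shows for sha384/ffdhe2048 with keyid=1: pin the initiator to one DH-HMAC-CHAP digest and DH group, attach a controller presenting the per-keyid secrets, check that the controller registered as nvme0, and detach before the next combination. A standalone replay of one iteration might look like the following hypothetical sketch, where scripts/rpc.py stands in for the harness's rpc_cmd wrapper and key1/ckey1 are assumed to name secrets already provisioned on both the host and the kernel target at 10.0.0.1:4420:

rpc=./scripts/rpc.py                 # assumed path inside an SPDK checkout
digest=sha384 dhgroup=ffdhe2048 keyid=1

# Restrict the host to the digest/DH-group pair under test.
$rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach, presenting the host key and (when configured) the controller key.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Authentication succeeded iff the new controller shows up as nvme0.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Tear down so the next combination starts from a clean slate.
$rpc bdev_nvme_detach_controller nvme0

The ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion in the record just above drops the controller-key argument entirely when no controller key is configured, which is why the keyid=4 iterations earlier in the log attach with --dhchap-key key4 alone.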
00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.059 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.319 nvme0n1 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.319 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.580 nvme0n1 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.580 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.840 nvme0n1 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.840 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.841 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.841 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.841 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.841 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.841 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.841 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.841 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.841 09:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.841 09:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.841 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.841 09:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.103 nvme0n1 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
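For readers following the trace: the nvmf/common.sh@741-755 steps above are the get_main_ns_ip helper resolving which address to hand to bdev_nvme_attach_controller. A minimal sketch reconstructed from this xtrace rather than quoted from the SPDK source, assuming TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 as in this run:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        # Each transport maps to the *name* of the env var holding its IP.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # Indirect expansion turns the variable name into its value.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"   # 10.0.0.1 on this tcp run
    }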
00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.103 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.362 nvme0n1 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
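Each nvme0n1 block in this trace is one iteration of the same loop: nvmet_auth_set_key (host/auth.sh@42-51) programs a key/ckey pair into the kernel nvmet target, and connect_authenticate (host/auth.sh@55-65) then proves the host can complete DH-HMAC-CHAP with that digest/dhgroup/keyid. A condensed sketch of the connect_authenticate side, reconstructed from the xtrace above rather than quoted verbatim (rpc_cmd wraps SPDK's scripts/rpc.py; the NQNs, port 4420, and initiator address are as printed in this run):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey=()
        # Pass a controller (bidirectional) key only if one exists for this keyid.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # Pin the host to the single digest/dhgroup pair under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach with DH-HMAC-CHAP; a rejected authentication fails the test here.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The controller must surface as nvme0, then is torn down for the next pass.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }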
00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.362 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.621 nvme0n1 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.621 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.879 nvme0n1 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.879 09:34:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.137 nvme0n1 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:18.137 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.138 nvme0n1 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.138 09:34:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.138 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.397 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.398 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.398 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.398 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.398 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.398 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.398 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.398 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.398 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:18.398 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.398 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.657 nvme0n1 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:18.657 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.658 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.917 nvme0n1 00:26:18.917 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.917 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.917 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.917 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.917 09:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.917 09:34:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.917 09:34:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.917 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.918 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.176 nvme0n1 00:26:19.176 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.176 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.176 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.176 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.176 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.176 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.176 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.176 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.176 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.176 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:19.436 09:34:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.436 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.437 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.437 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.437 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.437 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.437 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.437 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.437 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.437 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.437 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.437 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.697 nvme0n1 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:19.697 09:34:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.956 nvme0n1 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.956 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.524 nvme0n1 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.524 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.525 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.525 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.525 09:34:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.525 09:34:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.525 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.525 09:34:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.092 nvme0n1 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.093 09:34:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.093 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.658 nvme0n1 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:21.658 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.659 09:34:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.230 nvme0n1 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.230 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.231 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.231 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.231 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.231 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.231 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
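A note on the [[ -z '' ]] check in the keyid-4 block above: keyid 4 is the one key in this sweep without a controller (bidirectional) secret, so ckeys[4] is empty and the host/auth.sh@51 guard sees an empty string. The @58 line builds the optional argument pair with bash's ${var:+word} expansion. A minimal, self-contained sketch of that idiom, using placeholder values rather than the test's real DHHC-1 secrets:

#!/usr/bin/env bash
# ${ckeys[keyid]:+word} expands to word only when ckeys[keyid] is set AND
# non-empty, so ckey becomes either a two-element array or an empty one.
ckeys=([0]="DHHC-1:03:placeholder=" [4]="")   # sparse array, placeholder values

for keyid in "${!ckeys[@]}"; do
  # Same construction as host/auth.sh@58 in the trace above.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "keyid $keyid -> --dhchap-key key${keyid} ${ckey[*]}"
done
# prints:
#   keyid 0 -> --dhchap-key key0 --dhchap-ctrlr-key ckey0
#   keyid 4 -> --dhchap-key key4

This is why the bdev_nvme_attach_controller entries for keys 0-3 in this log carry --dhchap-ctrlr-key ckeyN while the keyid-4 attach does not: bidirectional authentication is only requested where a controller key exists.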
00:26:22.231 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.231 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.231 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.231 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.231 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.798 nvme0n1 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
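At this point the trace has advanced from ffdhe6144 to ffdhe8192 while still on sha384; the host/auth.sh@100/@101/@102 markers show the three nested loops driving the sweep. A runnable skeleton of that loop structure, reconstructed from the trace: the array contents are assumptions (the log only confirms sha384/sha512 and ffdhe2048/ffdhe6144/ffdhe8192), and the two functions are stubbed here.

#!/usr/bin/env bash
# Loop skeleton inferred from the host/auth.sh@100-104 markers in this
# xtrace output; digest/dhgroup lists are assumed, not confirmed by the log.
digests=("sha256" "sha384" "sha512")
dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
keys=([0]=key0 [1]=key1 [2]=key2 [3]=key3 [4]=key4)   # placeholder names

nvmet_auth_set_key()   { echo "target side: $1 $2 keyid=$3"; }   # stub
connect_authenticate() { echo "host side:   $1 $2 keyid=$3"; }   # stub

for digest in "${digests[@]}"; do            # host/auth.sh@100
  for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101
    for keyid in "${!keys[@]}"; do           # host/auth.sh@102
      nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # @103
      connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104
    done
  done
done

The ordering matches the trace: all five keyids complete under sha384/ffdhe6144 before the dhgroup advances to ffdhe8192, and the digest only moves on to sha512 (restarting at ffdhe2048) once the dhgroup list is exhausted.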
00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.798 09:34:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.731 nvme0n1 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.731 09:34:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.671 nvme0n1 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.671 09:34:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.240 nvme0n1 00:26:25.240 09:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.240 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.240 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.240 09:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.240 09:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.499 09:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.499 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.500 09:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.436 nvme0n1 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.436 09:34:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.436 09:34:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.371 nvme0n1 00:26:27.371 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.371 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.371 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.371 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.371 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.371 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.371 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.371 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.371 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.371 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.371 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.371 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.372 nvme0n1 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.372 09:34:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.372 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.632 nvme0n1 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.632 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.633 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.633 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:27.633 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.633 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.892 nvme0n1 00:26:27.892 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.892 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.892 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.892 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.892 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.893 09:34:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.893 09:34:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.893 09:34:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.152 nvme0n1 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.152 nvme0n1 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.152 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:28.411 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.412 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.672 nvme0n1 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.672 
09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.672 09:34:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.672 nvme0n1 00:26:28.672 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.932 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.932 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.932 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.932 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.932 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.932 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.932 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.932 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.932 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.932 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.932 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.932 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
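The trace repeats one fixed pattern per (digest, dhgroup, keyid) combination: nvmet_auth_set_key installs the DHHC-1 secret on the target side, bdev_nvme_set_options pins the initiator to the digest/dhgroup pair under test, bdev_nvme_attach_controller connects with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), and the result is verified and torn down. A minimal bash sketch of that initiator-side sequence, reconstructed from the host/auth.sh@55-65 xtrace entries only and not the verbatim source: rpc_cmd is the suite's RPC wrapper, the address, port, and NQNs are the literals shown in the log, the keys/ckeys arrays are assumed to hold the DHHC-1 secrets echoed above, and inlining the ckey expansion into the attach call is a simplification of the @58 array assignment.

    # Sketch reconstructed from the host/auth.sh@55-65 trace; not the verbatim source.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Restrict the initiator to the digest/dhgroup pair under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach with the host key; the controller key is added only when one is
        # defined for this keyid (the trace resolves the address via get_main_ns_ip,
        # which yields 10.0.0.1 throughout this run).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # Authentication succeeded only if the controller actually shows up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }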
00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.933 09:34:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.193 nvme0n1 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.193 09:34:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:29.193 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.194 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.452 nvme0n1 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.452 
09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.452 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.453 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:29.453 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.453 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.453 nvme0n1 00:26:29.453 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.453 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.453 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.453 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.453 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.453 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.711 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.970 nvme0n1 00:26:29.970 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.970 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.970 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.970 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.970 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.970 09:34:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.970 09:34:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.970 09:34:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.970 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.971 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.971 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.971 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.971 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.971 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.971 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.971 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.971 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.971 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.971 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.971 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.229 nvme0n1 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
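Every attach in the trace is preceded by the get_main_ns_ip helper (the nvmf/common.sh@741-755 entries): it keys an associative array by transport, picks the environment-variable name for that transport, and echoes the resolved address, which here is always NVMF_INITIATOR_IP resolving to 10.0.0.1 for tcp. A plausible reconstruction follows; the ${!ip} indirection and the TEST_TRANSPORT variable name are assumptions, since the trace only prints the selected name and the resolved address.

    # Sketch of the helper traced at nvmf/common.sh@741-755. The ${!ip}
    # indirection and TEST_TRANSPORT are assumed; the trace shows only the
    # chosen name (NVMF_INITIATOR_IP) and the address it resolves to (10.0.0.1).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # The two @747 checks: transport set, and a candidate exists for it.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP in this run
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"                          # 10.0.0.1 throughout this log
    }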
00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.229 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.230 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.488 nvme0n1 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.488 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.746 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.006 nvme0n1 00:26:31.006 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.006 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.006 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.006 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.006 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.006 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.006 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.006 09:34:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.006 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.007 09:34:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.007 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.267 nvme0n1 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.267 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.836 nvme0n1 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.836 09:34:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.400 nvme0n1 00:26:32.400 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.400 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.400 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.400 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.400 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.400 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.400 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.401 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.967 nvme0n1 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.967 09:34:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.537 nvme0n1 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.537 09:34:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.105 nvme0n1 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.105 09:34:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1ZmIxYjljYzBjOTVkMTZjMTExYzJjNjA0NzFjNjWreYXE: 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: ]] 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWZkM2NiMDNhMjAyNDMyNjU4ZWYxNTJjY2UyYWI3NGI2Njg5YjU5YmYzNWNjOTQwZDdlZGFkZDhjMjUyYmUwYujvdz4=: 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.105 09:34:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.106 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.106 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.106 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.044 nvme0n1 00:26:35.044 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.044 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.044 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.044 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.044 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.044 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.044 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.044 09:34:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.044 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.044 09:34:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.044 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.975 nvme0n1 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.975 09:34:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2U5NjY0NzJlZWVkZTA1ZGM0ZDMxODY4MDAxYmZlODcm3UGo: 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: ]] 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTJmYzQ3MjE5ZjZhNzhkMzdkZTIyY2E4MTBkY2QyNDO5r4J8: 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.975 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.976 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.976 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.976 09:34:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.976 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.976 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.976 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.976 09:34:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.976 09:34:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:35.976 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.976 09:34:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.911 nvme0n1 00:26:36.911 09:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.911 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.911 09:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.911 09:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.911 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.911 09:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.911 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM0MmZmYTYwMWM2MzExZDkxMjc4MDUxYTY5YmI3MjU2OGNjZjQ2NjFkMzZkYmUxsSTVEg==: 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: ]] 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA0MWRkNGY0OTNjZjQxNDZiM2QzNjY3ZTdkZGEyNjEkfiGh: 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:36.912 09:34:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.912 09:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.492 nvme0n1 00:26:37.492 09:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.492 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.492 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.492 09:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.492 09:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMyZTFiOGViMDNmNTIyYTRlZjk3NGIxYzkyYmM2OTAxMmYxMDYxODQ5MzQyZTFhZWI1YjJiMzkwNzVkYmMxMQsgpoA=: 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:37.757 09:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.701 nvme0n1 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBhZjRlZmQ4NTRkMjI0NTdkNDUwODA3MDE4OGE2MGNlNDRlNDBiMzAwYjRiYWVkSF6LUg==: 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: ]] 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyZTkyZTU4M2RmYWMzNzE2MWZmZmZjOTg0ODM5ZDUxNjhjNmZlZWMwNGY0MjIwb04rnA==: 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.701 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.702 
09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:38.702 request:
00:26:38.702 {
00:26:38.702 "name": "nvme0",
00:26:38.702 "trtype": "tcp",
00:26:38.702 "traddr": "10.0.0.1",
00:26:38.702 "adrfam": "ipv4",
00:26:38.702 "trsvcid": "4420",
00:26:38.702 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:26:38.702 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:26:38.702 "prchk_reftag": false,
00:26:38.702 "prchk_guard": false,
00:26:38.702 "hdgst": false,
00:26:38.702 "ddgst": false,
00:26:38.702 "method": "bdev_nvme_attach_controller",
00:26:38.702 "req_id": 1
00:26:38.702 }
00:26:38.702 Got JSON-RPC error response
00:26:38.702 response:
00:26:38.702 {
00:26:38.702 "code": -5,
00:26:38.702 "message": "Input/output error"
00:26:38.702 }
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:38.702 request:
00:26:38.702 {
00:26:38.702 "name": "nvme0",
00:26:38.702 "trtype": "tcp",
00:26:38.702 "traddr": "10.0.0.1",
00:26:38.702 "adrfam": "ipv4",
00:26:38.702 "trsvcid": "4420",
00:26:38.702 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:26:38.702 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:26:38.702 "prchk_reftag": false,
00:26:38.702 "prchk_guard": false,
00:26:38.702 "hdgst": false,
00:26:38.702 "ddgst": false,
00:26:38.702 "dhchap_key": "key2",
00:26:38.702 "method": "bdev_nvme_attach_controller",
00:26:38.702 "req_id": 1
00:26:38.702 }
00:26:38.702 Got JSON-RPC error response
00:26:38.702 response:
00:26:38.702 {
00:26:38.702 "code": -5,
00:26:38.702 "message": "Input/output error"
00:26:38.702 }
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:38.702 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:38.962 request:
00:26:38.962 {
00:26:38.962 "name": "nvme0",
00:26:38.962 "trtype": "tcp",
00:26:38.962 "traddr": "10.0.0.1",
00:26:38.962 "adrfam": "ipv4",
00:26:38.962 "trsvcid": "4420", 00:26:38.962 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:38.962 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:38.962 "prchk_reftag": false, 00:26:38.962 "prchk_guard": false, 00:26:38.962 "hdgst": false, 00:26:38.962 "ddgst": false, 00:26:38.962 "dhchap_key": "key1", 00:26:38.962 "dhchap_ctrlr_key": "ckey2", 00:26:38.962 "method": "bdev_nvme_attach_controller", 00:26:38.962 "req_id": 1 00:26:38.962 } 00:26:38.962 Got JSON-RPC error response 00:26:38.962 response: 00:26:38.962 { 00:26:38.962 "code": -5, 00:26:38.962 "message": "Input/output error" 00:26:38.962 } 00:26:38.962 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:38.962 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:38.962 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:38.962 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:38.962 09:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:38.962 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:38.962 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:38.962 09:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:38.962 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:38.962 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:38.962 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:38.962 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:38.963 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:38.963 09:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:38.963 rmmod nvme_tcp 00:26:38.963 rmmod nvme_fabrics 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 924146 ']' 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 924146 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 924146 ']' 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 924146 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 924146 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 924146' 00:26:38.963 killing process with pid 924146 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 924146 00:26:38.963 09:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 924146 00:26:39.222 09:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
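The three failing attaches above are host/auth.sh's negative checks for DH-HMAC-CHAP: each NOT rpc_cmd bdev_nvme_attach_controller run (no key, wrong key "key2", mismatched "key1"/"ckey2" pair) must come back with JSON-RPC error -5 (Input/output error), and bdev_nvme_get_controllers piped through jq length must stay at 0 to prove the failed handshake left no controller behind. A minimal sketch of one such check, assuming a target already listening on 10.0.0.1:4420 with the DH-CHAP keys loaded (NQNs and flags exactly as in the trace above):

  # Attach with a controller key the subsystem will reject; success here is a bug.
  if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2; then
    echo "attach unexpectedly succeeded" >&2
    exit 1
  fi
  # The rejected attach must not leave a stale controller around.
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]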
00:26:39.222 09:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:39.222 09:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:39.222 09:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:39.222 09:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:39.222 09:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.222 09:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:39.222 09:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.125 09:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:41.126 09:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:41.126 09:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:41.384 09:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:41.384 09:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:41.384 09:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:41.384 09:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:41.384 09:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:41.384 09:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:41.384 09:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:41.384 09:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:41.384 09:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:41.384 09:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:42.761 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:42.761 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:42.761 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:42.761 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:42.761 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:42.761 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:42.761 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:42.761 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:42.761 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:42.761 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:42.761 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:42.761 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:42.761 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:42.761 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:42.761 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:42.761 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:43.696 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:26:43.696 09:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Mgs /tmp/spdk.key-null.mmg /tmp/spdk.key-sha256.pTI /tmp/spdk.key-sha384.oT3 /tmp/spdk.key-sha512.rnM 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:43.696 09:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:45.073 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:45.073 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:45.073 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:45.073 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:45.073 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:45.073 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:45.073 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:45.073 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:45.073 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:45.073 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:45.073 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:45.073 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:45.073 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:45.073 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:45.073 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:45.073 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:45.073 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:45.073 00:26:45.073 real 0m49.547s 00:26:45.073 user 0m46.472s 00:26:45.073 sys 0m5.893s 00:26:45.073 09:34:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:45.073 09:34:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.073 ************************************ 00:26:45.073 END TEST nvmf_auth_host 00:26:45.073 ************************************ 00:26:45.073 09:34:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:45.073 09:34:56 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:26:45.073 09:34:56 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:45.073 09:34:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:45.073 09:34:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:45.073 09:34:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:45.073 ************************************ 00:26:45.073 START TEST nvmf_digest 00:26:45.073 ************************************ 00:26:45.073 09:34:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:45.073 * Looking for test storage... 
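Before handing off to the digest suite, the cleanup above (host/auth.sh@25-27 and clean_kernel_target) also unwinds the kernel nvmet target through configfs in strict reverse order of creation: symlinks before directories, and the port before the subsystem, since configfs refuses to rmdir anything still referenced. Condensed from the trace, with the one destination the xtrace elides marked as an assumption:

  rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > .../namespaces/1/enable   # assumed target of the bare 'echo 0' in the trace: disable the namespace first
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_tcp nvmet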
00:26:45.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:45.073 09:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.073 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:45.073 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.073 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.073 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.073 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.073 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.073 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.073 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.073 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:45.074 09:34:56 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:45.074 09:34:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:46.979 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.979 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:46.979 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:46.980 Found net devices under 0000:09:00.0: cvl_0_0 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:46.980 Found net devices under 0000:09:00.1: cvl_0_1 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:46.980 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:47.238 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:47.238 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:47.238 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:47.238 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:47.238 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:47.238 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:47.238 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:47.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:47.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:26:47.238 00:26:47.238 --- 10.0.0.2 ping statistics --- 00:26:47.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.238 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:26:47.238 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:47.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:47.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:26:47.238 00:26:47.238 --- 10.0.0.1 ping statistics --- 00:26:47.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.239 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:47.239 ************************************ 00:26:47.239 START TEST nvmf_digest_clean 00:26:47.239 ************************************ 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=933611 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 933611 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 933611 ']' 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.239 
09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.239 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:47.239 [2024-07-15 09:34:58.328575] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:26:47.239 [2024-07-15 09:34:58.328658] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.239 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.239 [2024-07-15 09:34:58.392473] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.499 [2024-07-15 09:34:58.497267] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.499 [2024-07-15 09:34:58.497315] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.499 [2024-07-15 09:34:58.497344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.499 [2024-07-15 09:34:58.497356] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.499 [2024-07-15 09:34:58.497366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
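With cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2 and cvl_0_1 left in the root namespace as 10.0.0.1 (verified by the ping exchange above), the digest test starts nvmf_tgt inside the namespace with --wait-for-rpc, so the framework sits idle until the harness finishes configuring it over /var/tmp/spdk.sock. The bdevperf runs that follow use the same socket-first pattern on /var/tmp/bperf.sock; condensed from the invocations below (paths shortened to the repo root), the per-run sequence is roughly:

  # Start the initiator paused; the harness waits for the socket before issuing RPCs.
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # Attach to the in-namespace target with data digest enabled.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Drive I/O for the configured 2 seconds.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests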
00:26:47.499 [2024-07-15 09:34:58.497396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.499 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.499 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:47.499 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:47.499 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:47.499 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:47.499 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.499 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:47.499 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:47.499 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:47.499 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.499 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:47.758 null0 00:26:47.758 [2024-07-15 09:34:58.727834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.758 [2024-07-15 09:34:58.752052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=933630 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 933630 /var/tmp/bperf.sock 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 933630 ']' 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:47.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.758 09:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:47.758 [2024-07-15 09:34:58.802253] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:26:47.758 [2024-07-15 09:34:58.802325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid933630 ] 00:26:47.758 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.758 [2024-07-15 09:34:58.860138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.016 [2024-07-15 09:34:58.967522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.016 09:34:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.016 09:34:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:48.016 09:34:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:48.016 09:34:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:48.016 09:34:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:48.275 09:34:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.275 09:34:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.842 nvme0n1 00:26:48.842 09:34:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:48.842 09:34:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:48.842 Running I/O for 2 seconds... 
00:26:51.376 00:26:51.376 Latency(us) 00:26:51.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.376 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:51.376 nvme0n1 : 2.04 18700.21 73.05 0.00 0.00 6705.88 3276.80 45049.93 00:26:51.376 =================================================================================================================== 00:26:51.376 Total : 18700.21 73.05 0.00 0.00 6705.88 3276.80 45049.93 00:26:51.376 0 00:26:51.376 09:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:51.376 09:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:51.376 09:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:51.376 09:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:51.376 09:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:51.376 | select(.opcode=="crc32c") 00:26:51.376 | "\(.module_name) \(.executed)"' 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 933630 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 933630 ']' 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 933630 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 933630 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 933630' 00:26:51.376 killing process with pid 933630 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 933630 00:26:51.376 Received shutdown signal, test time was about 2.000000 seconds 00:26:51.376 00:26:51.376 Latency(us) 00:26:51.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.376 =================================================================================================================== 00:26:51.376 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 933630 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:51.376 09:35:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=934053 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 934053 /var/tmp/bperf.sock 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 934053 ']' 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:51.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.376 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:51.634 [2024-07-15 09:35:02.593651] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:26:51.634 [2024-07-15 09:35:02.593741] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934053 ] 00:26:51.634 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:51.634 Zero copy mechanism will not be used. 
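Each run ends with an accounting check rather than just an IOPS figure: digest.sh pulls accel statistics out of bdevperf and confirms that crc32c was actually executed, and by which module; for these scan_dsa=false runs the expected module is "software". The query, using the exact jq filter from the trace above:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[]
              | select(.opcode=="crc32c")
              | "\(.module_name) \(.executed)"'

The repeated "I/O size of 131072 is greater than zero copy threshold (65536)" notice on the large-block runs is informational, not a failure: bdevperf is stating that those I/Os fall back to copying on the socket path.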
00:26:51.634 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.634 [2024-07-15 09:35:02.660193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.634 [2024-07-15 09:35:02.770881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.634 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:51.634 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:51.634 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:51.634 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:51.634 09:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:52.202 09:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:52.202 09:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:52.459 nvme0n1 00:26:52.459 09:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:52.459 09:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:52.717 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:52.717 Zero copy mechanism will not be used. 00:26:52.717 Running I/O for 2 seconds... 
00:26:54.617 00:26:54.618 Latency(us) 00:26:54.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.618 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:54.618 nvme0n1 : 2.00 5747.29 718.41 0.00 0.00 2779.50 752.45 10777.03 00:26:54.618 =================================================================================================================== 00:26:54.618 Total : 5747.29 718.41 0.00 0.00 2779.50 752.45 10777.03 00:26:54.618 0 00:26:54.618 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:54.618 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:54.618 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:54.618 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:54.618 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:54.618 | select(.opcode=="crc32c") 00:26:54.618 | "\(.module_name) \(.executed)"' 00:26:54.876 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:54.876 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:54.876 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:54.876 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:54.876 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 934053 00:26:54.876 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 934053 ']' 00:26:54.876 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 934053 00:26:54.876 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:54.876 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:54.876 09:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 934053 00:26:54.876 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:54.876 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:54.876 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 934053' 00:26:54.876 killing process with pid 934053 00:26:54.876 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 934053 00:26:54.876 Received shutdown signal, test time was about 2.000000 seconds 00:26:54.876 00:26:54.876 Latency(us) 00:26:54.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.876 =================================================================================================================== 00:26:54.876 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.876 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 934053 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:55.133 09:35:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=934564 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 934564 /var/tmp/bperf.sock 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 934564 ']' 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:55.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:55.133 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:55.133 [2024-07-15 09:35:06.306890] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:26:55.133 [2024-07-15 09:35:06.306972] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934564 ] 00:26:55.390 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.390 [2024-07-15 09:35:06.367074] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.390 [2024-07-15 09:35:06.474058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.390 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.390 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:55.390 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:55.390 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:55.390 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:55.648 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.648 09:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:56.215 nvme0n1 00:26:56.215 09:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:56.215 09:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:56.215 Running I/O for 2 seconds... 
00:26:58.119 00:26:58.119
00:26:58.119                                                                Latency(us)
00:26:58.119 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:58.119 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:58.119 nvme0n1                     :       2.01   22348.42      87.30       0.00     0.00    5720.90    2475.80   16505.36
00:26:58.119 ===================================================================================================================
00:26:58.119 Total                       :            22348.42      87.30       0.00     0.00    5720.90    2475.80   16505.36
00:26:58.119 0
00:26:58.376 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:58.376 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:58.376 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:58.376 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:58.376 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:58.376 | select(.opcode=="crc32c")
00:26:58.376 | "\(.module_name) \(.executed)"'
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 934564
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 934564 ']'
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 934564
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 934564
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 934564'
00:26:58.688 killing process with pid 934564
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 934564
00:26:58.688 Received shutdown signal, test time was about 2.000000 seconds
00:26:58.688
00:26:58.688                                                                Latency(us)
00:26:58.688 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:58.688 ===================================================================================================================
00:26:58.688 Total                       :                0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 934564
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
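The verification step reduces to one pipeline (exactly the RPC and jq filter traced above; with scan_dsa=false the test expects the software module to have executed every CRC32C operation):

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'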
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:26:58.688 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=934976
00:26:58.689 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:26:58.689 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 934976 /var/tmp/bperf.sock
00:26:58.689 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 934976 ']'
00:26:58.689 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:58.689 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:58.689 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:58.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:58.689 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:58.689 09:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:26:58.945 [2024-07-15 09:35:09.907147] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:26:58.945 [2024-07-15 09:35:09.907228] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934976 ]
00:26:58.945 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:58.945 Zero copy mechanism will not be used.
00:26:58.945 EAL: No free 2048 kB hugepages reported on node 1
00:26:58.945 [2024-07-15 09:35:09.963645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:58.945 [2024-07-15 09:35:10.074195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:58.945 09:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:58.945 09:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:26:58.945 09:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:26:58.945 09:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:26:58.945 09:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:26:59.511 09:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:59.511 09:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:59.767 nvme0n1
00:26:59.767 09:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:26:59.767 09:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:00.026 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:00.026 Zero copy mechanism will not be used.
00:27:00.026 Running I/O for 2 seconds...
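Only the I/O geometry changes for this second pass (flags as traced above). The zero-copy notice is informational rather than a failure: per the message, the 131072-byte I/O size exceeds the 65536-byte zero-copy threshold, so bdevperf simply proceeds without that mechanism:

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -q 16 -t 2 -z --wait-for-rpc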
00:27:01.935 00:27:01.935
00:27:01.935                                                                Latency(us)
00:27:01.935 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:01.935 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:01.935 nvme0n1                     :       2.00    6127.01     765.88       0.00     0.00    2604.44    2099.58    9417.77
00:27:01.935 ===================================================================================================================
00:27:01.935 Total                       :             6127.01     765.88       0.00     0.00    2604.44    2099.58    9417.77
00:27:01.935 0
00:27:01.935 09:35:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:01.935 09:35:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:01.935 09:35:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:01.935 09:35:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:01.935 09:35:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:01.935 | select(.opcode=="crc32c")
00:27:01.935 | "\(.module_name) \(.executed)"'
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 934976
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 934976 ']'
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 934976
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 934976
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 934976'
00:27:02.198 killing process with pid 934976
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 934976
00:27:02.198 Received shutdown signal, test time was about 2.000000 seconds
00:27:02.198
00:27:02.198                                                                Latency(us)
00:27:02.198 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:02.198 ===================================================================================================================
00:27:02.198 Total                       :                0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:27:02.198 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 934976
00:27:02.458 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 933611
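The killprocess helper seen twice above follows a consistent idiom before sending the signal; roughly (a sketch of the pattern visible in the trace, not the helper's full text):

    kill -0 "$pid"                                    # bail out if the pid is already gone
    process_name=$(ps --no-headers -o comm= "$pid")   # on Linux, identify the process first
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    fi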
00:27:02.458 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 933611 ']'
00:27:02.458 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 933611
00:27:02.458 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:27:02.458 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:02.458 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 933611
00:27:02.458 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:27:02.458 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:27:02.458 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 933611'
00:27:02.458 killing process with pid 933611
00:27:02.458 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 933611
00:27:02.458 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 933611
00:27:02.716
00:27:02.716 real 0m15.554s
00:27:02.716 user 0m30.989s
00:27:02.716 sys 0m4.176s
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:02.716 ************************************
00:27:02.716 END TEST nvmf_digest_clean
00:27:02.716 ************************************
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:02.716 ************************************
00:27:02.716 START TEST nvmf_digest_error
00:27:02.716 ************************************
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=935414
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 935414
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 935414 ']'
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
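The nvmf_digest_error fixture begins by standing up its own target (command as traced above; the backgrounding and pid capture are assumptions standing in for the harness's nvmfappstart helper): nvmf_tgt runs inside the test's network namespace with every tracepoint group enabled (-e 0xFFFF) and waits for RPC-driven configuration:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!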
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:02.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:02.716 09:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:02.973 [2024-07-15 09:35:13.943336] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:27:02.973 [2024-07-15 09:35:13.943420] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:02.973 EAL: No free 2048 kB hugepages reported on node 1
00:27:02.973 [2024-07-15 09:35:14.016607] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:02.973 [2024-07-15 09:35:14.120156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:02.973 [2024-07-15 09:35:14.120206] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:02.973 [2024-07-15 09:35:14.120234] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:02.973 [2024-07-15 09:35:14.120246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:02.973 [2024-07-15 09:35:14.120256] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
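Both trace-capture paths the notices mention remain available while the target runs; either of these (taken verbatim from the notices above) preserves the tracepoint data:

    spdk_trace -s nvmf -i 0            # live snapshot, as the notice suggests
    cp /dev/shm/nvmf_trace.0 .         # or copy the ring for offline analysis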
00:27:02.973 [2024-07-15 09:35:14.120282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:27:02.974 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:02.974 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:27:02.974 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:02.974 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:02.974 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:02.974 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:02.974 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:27:02.974 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.974 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:03.231 [2024-07-15 09:35:14.168762] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:03.231 null0
00:27:03.231 [2024-07-15 09:35:14.279610] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:03.231 [2024-07-15 09:35:14.303826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=935552
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 935552 /var/tmp/bperf.sock
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 935552 ']'
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
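Condensed, the error-injection plumbing set up here (together with the inject toggles in the trace just below) works like this: on the target, CRC32C work is routed through the accel error module, which can then be told to corrupt its results; on the host, bdevperf keeps per-opcode NVMe error statistics and retries indefinitely. A sketch, with the target-socket addressing simplified (in the harness, rpc_cmd talks to the namespaced target's default socket):

    # target: route crc32c through the error module, then corrupt 256 operations
    scripts/rpc.py accel_assign_opc -o crc32c -m error
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable      # start clean
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # host: count NVMe errors and retry forever instead of failing the bdev
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1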
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:03.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:03.231 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:03.231 [2024-07-15 09:35:14.347607] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:27:03.231 [2024-07-15 09:35:14.347682] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935552 ]
00:27:03.231 EAL: No free 2048 kB hugepages reported on node 1
00:27:03.231 [2024-07-15 09:35:14.404633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:03.489 [2024-07-15 09:35:14.511950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:03.489 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:03.489 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:27:03.489 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:03.489 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:03.747 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:03.747 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:03.747 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:03.747 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:03.747 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:03.747 09:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:04.315 nvme0n1
00:27:04.315 09:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:04.315 09:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:04.315 09:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:04.315 09:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:04.315 09:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:04.315 09:35:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:04.574 Running I/O for 2 seconds... 00:27:04.574 [2024-07-15 09:35:15.600823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.600879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.600913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.574 [2024-07-15 09:35:15.613502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.613531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.613562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.574 [2024-07-15 09:35:15.626231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.626260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.626277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.574 [2024-07-15 09:35:15.638416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.638443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.638474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.574 [2024-07-15 09:35:15.653546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.653573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.653604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.574 [2024-07-15 09:35:15.668084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.668126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.668142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.574 [2024-07-15 09:35:15.680477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.680507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20162 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.680539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.574 [2024-07-15 09:35:15.691425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.691451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.691483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.574 [2024-07-15 09:35:15.704821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.704851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.704868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.574 [2024-07-15 09:35:15.717302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.717348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.717371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.574 [2024-07-15 09:35:15.731426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.731455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.731486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.574 [2024-07-15 09:35:15.743511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.743554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.743570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.574 [2024-07-15 09:35:15.754920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.754948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.754979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.574 [2024-07-15 09:35:15.767419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.574 [2024-07-15 09:35:15.767446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:103 nsid:1 lba:10351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.574 [2024-07-15 09:35:15.767475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.832 [2024-07-15 09:35:15.780551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.832 [2024-07-15 09:35:15.780579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.832 [2024-07-15 09:35:15.780610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.832 [2024-07-15 09:35:15.794813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.832 [2024-07-15 09:35:15.794841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.832 [2024-07-15 09:35:15.794872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.832 [2024-07-15 09:35:15.805749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.832 [2024-07-15 09:35:15.805775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.805813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.818815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.818842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.818871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.834924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.834974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.834993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.847342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.847368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.847398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.860222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.860249] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.860278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.872551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.872581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.872597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.886126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.886156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.886173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.899631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.899662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.899679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.910998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.911025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.911056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.926812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.926840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.926871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.939905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.939932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.939962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.952172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.952201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.952235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.964417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.964462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.964478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.975445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.975474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.975490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:15.987948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:15.987978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:15.987994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:16.000756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:16.000798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:16.000820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-15 09:35:16.013786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:04.833 [2024-07-15 09:35:16.013823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-15 09:35:16.013841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.092 [2024-07-15 09:35:16.027255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.092 [2024-07-15 09:35:16.027285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.092 [2024-07-15 09:35:16.027318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.092 [2024-07-15 09:35:16.037984] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.092 [2024-07-15 09:35:16.038012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.092 [2024-07-15 09:35:16.038042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.092 [2024-07-15 09:35:16.051523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.092 [2024-07-15 09:35:16.051550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.092 [2024-07-15 09:35:16.051587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.092 [2024-07-15 09:35:16.066034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.092 [2024-07-15 09:35:16.066063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.092 [2024-07-15 09:35:16.066080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.092 [2024-07-15 09:35:16.080478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.092 [2024-07-15 09:35:16.080509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.092 [2024-07-15 09:35:16.080526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.092 [2024-07-15 09:35:16.091571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.092 [2024-07-15 09:35:16.091597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.092 [2024-07-15 09:35:16.091628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.092 [2024-07-15 09:35:16.104191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.092 [2024-07-15 09:35:16.104234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.092 [2024-07-15 09:35:16.104250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.092 [2024-07-15 09:35:16.116590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.092 [2024-07-15 09:35:16.116617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.092 [2024-07-15 09:35:16.116647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:05.092 [2024-07-15 09:35:16.129078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.092 [2024-07-15 09:35:16.129119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.092 [2024-07-15 09:35:16.129135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.092 [2024-07-15 09:35:16.140669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.092 [2024-07-15 09:35:16.140712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.093 [2024-07-15 09:35:16.140728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.093 [2024-07-15 09:35:16.153554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.093 [2024-07-15 09:35:16.153582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.093 [2024-07-15 09:35:16.153612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.093 [2024-07-15 09:35:16.166789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.093 [2024-07-15 09:35:16.166826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.093 [2024-07-15 09:35:16.166843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.093 [2024-07-15 09:35:16.179033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.093 [2024-07-15 09:35:16.179063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.093 [2024-07-15 09:35:16.179080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.093 [2024-07-15 09:35:16.192373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.093 [2024-07-15 09:35:16.192402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.093 [2024-07-15 09:35:16.192434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.093 [2024-07-15 09:35:16.204531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.093 [2024-07-15 09:35:16.204561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.093 [2024-07-15 09:35:16.204592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.093 [2024-07-15 09:35:16.216942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.093 [2024-07-15 09:35:16.216971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.093 [2024-07-15 09:35:16.216988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.093 [2024-07-15 09:35:16.230494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.093 [2024-07-15 09:35:16.230524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.093 [2024-07-15 09:35:16.230557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.093 [2024-07-15 09:35:16.242798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.093 [2024-07-15 09:35:16.242848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.093 [2024-07-15 09:35:16.242864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.093 [2024-07-15 09:35:16.255234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.093 [2024-07-15 09:35:16.255277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.093 [2024-07-15 09:35:16.255292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.093 [2024-07-15 09:35:16.267981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.093 [2024-07-15 09:35:16.268009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.093 [2024-07-15 09:35:16.268044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.093 [2024-07-15 09:35:16.280237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.093 [2024-07-15 09:35:16.280264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.093 [2024-07-15 09:35:16.280294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.293885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.293914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.293945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.306657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.306687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.306718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.318533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.318559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.318589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.330780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.330828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.330870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.343955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.343985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.344016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.357551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.357577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.357607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.370314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.370343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.370374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.383027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.383078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:05.353 [2024-07-15 09:35:16.383096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.395784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.395822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.395840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.409653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.409697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.409714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.421005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.421035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.421052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.435231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.435265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.435296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.448742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.448772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.448789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.460017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.460048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.460064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.474081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.474126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:25252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.474143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.486828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.486856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.486892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.498591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.498618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.498649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.511696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.511722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.511751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.524872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.524915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.524932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.353 [2024-07-15 09:35:16.537899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.353 [2024-07-15 09:35:16.537929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.353 [2024-07-15 09:35:16.537946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.550145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.550187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.550203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.563601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.563631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.563648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.577483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.577511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.577526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.592717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.592746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.592763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.604218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.604246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.604283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.616766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.616814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.616831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.630868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.630894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.630925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.645441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.645470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.645487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.656957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 
[2024-07-15 09:35:16.656986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.657003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.670080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.670110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.670127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.684370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.684399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.684415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.696648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.696677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.696693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.709621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.709649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.709680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.722797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.722830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.722860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.614 [2024-07-15 09:35:16.736549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.614 [2024-07-15 09:35:16.736575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.614 [2024-07-15 09:35:16.736605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.615 [2024-07-15 09:35:16.748943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x184bd50) 00:27:05.615 [2024-07-15 09:35:16.748985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.615 [2024-07-15 09:35:16.749000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.615 [2024-07-15 09:35:16.761831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.615 [2024-07-15 09:35:16.761860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.615 [2024-07-15 09:35:16.761877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.615 [2024-07-15 09:35:16.774439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.615 [2024-07-15 09:35:16.774469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.615 [2024-07-15 09:35:16.774486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.615 [2024-07-15 09:35:16.784376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.615 [2024-07-15 09:35:16.784402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.615 [2024-07-15 09:35:16.784431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.615 [2024-07-15 09:35:16.798835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.615 [2024-07-15 09:35:16.798864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.615 [2024-07-15 09:35:16.798895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.811200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.811228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.811258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.826906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.826933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.826968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.841816] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.841846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.841863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.852605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.852635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.852651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.867857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.867884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.867915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.881189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.881217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.881248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.893435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.893478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.893493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.908132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.908159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.908188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.923748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.923775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.923813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.937315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.937359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.937376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.948211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.948259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.948278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.961047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.961074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.961090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.974704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.974731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.974762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.985614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.985639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.985669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:16.998670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:16.998696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:16.998727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:17.014451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:17.014477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:17.014508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:17.026929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:17.026955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:17.026987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:17.040690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:17.040719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:17.040736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:17.052170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:17.052199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:17.052232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.878 [2024-07-15 09:35:17.065672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:05.878 [2024-07-15 09:35:17.065701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.878 [2024-07-15 09:35:17.065734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.164 [2024-07-15 09:35:17.079866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.079915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.079931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.090627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.090659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.090677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.106303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.106346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.106362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.121031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.121061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.121079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.132624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.132651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.132667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.144872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.144900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.144933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.157702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.157744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.157761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.172189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.172233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.172258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.183525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.183554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.183572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.195901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.195929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:06.165 [2024-07-15 09:35:17.195946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.208225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.208252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.208268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.223655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.223683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.223715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.235038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.235066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.235096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.247845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.247873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.247904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.260583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.260611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.260644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.273538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.273566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.273582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.285933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.285962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:12827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.285979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.298947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.298976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.299009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.311294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.311325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.311356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.323927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.323956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.323987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.165 [2024-07-15 09:35:17.337032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.165 [2024-07-15 09:35:17.337063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.165 [2024-07-15 09:35:17.337079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.426 [2024-07-15 09:35:17.348198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.426 [2024-07-15 09:35:17.348227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.426 [2024-07-15 09:35:17.348259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.426 [2024-07-15 09:35:17.361429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.426 [2024-07-15 09:35:17.361472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.426 [2024-07-15 09:35:17.361489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.426 [2024-07-15 09:35:17.374695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.426 [2024-07-15 09:35:17.374724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.426 [2024-07-15 09:35:17.374756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.426 [2024-07-15 09:35:17.389185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.426 [2024-07-15 09:35:17.389216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.426 [2024-07-15 09:35:17.389238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.426 [2024-07-15 09:35:17.404922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.426 [2024-07-15 09:35:17.404951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.426 [2024-07-15 09:35:17.404983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.426 [2024-07-15 09:35:17.419454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.427 [2024-07-15 09:35:17.419490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.427 [2024-07-15 09:35:17.419522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.427 [2024-07-15 09:35:17.432795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.427 [2024-07-15 09:35:17.432844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.427 [2024-07-15 09:35:17.432862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.427 [2024-07-15 09:35:17.447658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.427 [2024-07-15 09:35:17.447687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.427 [2024-07-15 09:35:17.447718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.427 [2024-07-15 09:35:17.459652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.427 [2024-07-15 09:35:17.459681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.427 [2024-07-15 09:35:17.459714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.427 [2024-07-15 09:35:17.473797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 
00:27:06.427 [2024-07-15 09:35:17.473833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.427 [2024-07-15 09:35:17.473864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.427 [2024-07-15 09:35:17.487747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.427 [2024-07-15 09:35:17.487777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.427 [2024-07-15 09:35:17.487793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.427 [2024-07-15 09:35:17.500847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.427 [2024-07-15 09:35:17.500877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.427 [2024-07-15 09:35:17.500893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.427 [2024-07-15 09:35:17.512704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.427 [2024-07-15 09:35:17.512737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.427 [2024-07-15 09:35:17.512768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.427 [2024-07-15 09:35:17.525200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.427 [2024-07-15 09:35:17.525226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.427 [2024-07-15 09:35:17.525257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.427 [2024-07-15 09:35:17.538834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.427 [2024-07-15 09:35:17.538864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.427 [2024-07-15 09:35:17.538880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.427 [2024-07-15 09:35:17.551170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50) 00:27:06.427 [2024-07-15 09:35:17.551198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.427 [2024-07-15 09:35:17.551229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.427 [2024-07-15 09:35:17.563688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x184bd50)
00:27:06.427 [2024-07-15 09:35:17.563715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.427 [2024-07-15 09:35:17.563745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:06.427 [2024-07-15 09:35:17.576230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x184bd50)
00:27:06.427 [2024-07-15 09:35:17.576260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.427 [2024-07-15 09:35:17.576276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:06.427
00:27:06.427 Latency(us)
00:27:06.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:06.427 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:06.427 nvme0n1 : 2.00 19585.37 76.51 0.00 0.00 6527.78 3422.44 22039.51
00:27:06.427 ===================================================================================================================
00:27:06.427 Total : 19585.37 76.51 0.00 0.00 6527.78 3422.44 22039.51
00:27:06.427 0
00:27:06.427 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:06.427 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:06.427 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:06.427 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:06.427 | .driver_specific
00:27:06.427 | .nvme_error
00:27:06.427 | .status_code
00:27:06.427 | .command_transient_transport_error'
00:27:06.685 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 ))
00:27:06.685 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 935552
00:27:06.685 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 935552 ']'
00:27:06.685 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 935552
00:27:06.685 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:06.685 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:06.685 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 935552
00:27:06.685 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:06.685 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:06.685 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 935552'
00:27:06.685 killing process with pid 935552
00:27:06.685 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 935552
00:27:06.685 Received shutdown signal, test time was about 2.000000 seconds
00:27:06.685 Latency(us)
00:27:06.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:06.685 ===================================================================================================================
00:27:06.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:06.685 09:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 935552
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=935965
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 935965 /var/tmp/bperf.sock
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 935965 ']'
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:07.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:07.254 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:07.254 [2024-07-15 09:35:18.194527] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:27:07.254 [2024-07-15 09:35:18.194622] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935965 ]
00:27:07.254 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:07.254 Zero copy mechanism will not be used.
00:27:07.254 EAL: No free 2048 kB hugepages reported on node 1
00:27:07.254 [2024-07-15 09:35:18.259811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:07.254 [2024-07-15 09:35:18.373113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:07.512 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:07.512 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:27:07.512 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:07.512 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:07.770 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:07.770 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:07.770 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:07.770 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:07.770 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:07.770 09:35:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:08.341 nvme0n1
00:27:08.341 09:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:08.341 09:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:08.341 09:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:08.341 09:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:08.341 09:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:08.341 09:35:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:08.341 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:08.341 Zero copy mechanism will not be used.
00:27:08.341 Running I/O for 2 seconds...
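The trace above condenses to the following standalone sketch of one error-injection pass (randread, 128 KiB I/Os, queue depth 16). It is an illustration, not the test script itself: it assumes the workspace paths shown in the trace, that the rpc_cmd calls in digest.sh reach the nvmf target application's default RPC socket (as the autotest helper does), that the target configured earlier in this job is still listening on 10.0.0.2:4420, and it substitutes a fixed sleep for the waitforlisten polling helper.

  #!/usr/bin/env bash
  # Sketch of the digest-error pass traced above; assumptions noted in the
  # comments below are not taken from this log verbatim.
  set -e
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf idle; -z makes it wait for an RPC before running I/O.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  sleep 1  # stand-in for waitforlisten, which polls the RPC socket

  # Keep per-command NVMe error statistics and retry failed I/O forever, so
  # transient transport errors are counted without failing the workload.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Injection stays disabled while the controller attaches with data digest
  # (--ddgst) enabled, then crc32c corruption is switched on for the run.
  # (Assumed: rpc.py without -s reaches the target app's default socket.)
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Drive the workload, then read back the same counter that
  # get_transient_errcount checks in the trace above.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

  kill "$bperfpid"

A pass succeeds when the reported transient-error count is positive, which is what the (( 153 > 0 )) check in the earlier trace verifies.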
00:27:08.341 [2024-07-15 09:35:19.390611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:08.341 [2024-07-15 09:35:19.390679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.341 [2024-07-15 09:35:19.390700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.341 [2024-07-15 09:35:19.395682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:08.341 [2024-07-15 09:35:19.395716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.341 [2024-07-15 09:35:19.395734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.341 [2024-07-15 09:35:19.399733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:08.341 [2024-07-15 09:35:19.399764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.341 [2024-07-15 09:35:19.399782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.341 [2024-07-15 09:35:19.403589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:08.341 [2024-07-15 09:35:19.403619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.341 [2024-07-15 09:35:19.403636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.341 [2024-07-15 09:35:19.406157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:08.341 [2024-07-15 09:35:19.406187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.342 [2024-07-15 09:35:19.406204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.342 [2024-07-15 09:35:19.409932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:08.342 [2024-07-15 09:35:19.409961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.342 [2024-07-15 09:35:19.409978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.342 [2024-07-15 09:35:19.412488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:08.342 [2024-07-15 09:35:19.412517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.342 [2024-07-15 09:35:19.412533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.342 [2024-07-15 09:35:19.415851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0)
00:27:08.342 [2024-07-15 09:35:19.415880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.342 [2024-07-15 09:35:19.415897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.342 [2024-07-15 09:35:19.420136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0)
00:27:08.342 [2024-07-15 09:35:19.420165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.342 [2024-07-15 09:35:19.420182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line sequence -- data digest error on tqpair=(0xfd34f0), READ sqid:1 nsid:1 len:32, COMMAND TRANSIENT TRANSPORT ERROR (00/22) -- repeats for the remaining in-flight READs (qid:1, various cids, varying lba, sqhd cycling 0001/0021/0041/0061) from [2024-07-15 09:35:19.424687] through [2024-07-15 09:35:20.129772], console time 00:27:08.342 through 00:27:09.131 ...]
00:27:09.131 [2024-07-15 09:35:20.134613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0)
00:27:09.131 [2024-07-15 09:35:20.134643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.131 [2024-07-15 09:35:20.134660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.131 [2024-07-15 09:35:20.139717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.131 [2024-07-15 09:35:20.139746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.131 [2024-07-15 09:35:20.139778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.131 [2024-07-15 09:35:20.144318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.131 [2024-07-15 09:35:20.144348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.131 [2024-07-15 09:35:20.144365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.131 [2024-07-15 09:35:20.148610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.131 [2024-07-15 09:35:20.148639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.131 [2024-07-15 09:35:20.148655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.131 [2024-07-15 09:35:20.153112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.131 [2024-07-15 09:35:20.153141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.131 [2024-07-15 09:35:20.153158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.131 [2024-07-15 09:35:20.157373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.131 [2024-07-15 09:35:20.157402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.131 [2024-07-15 09:35:20.157419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.131 [2024-07-15 09:35:20.162540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.131 [2024-07-15 09:35:20.162567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.131 [2024-07-15 09:35:20.162600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.166245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.166274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.166291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.170621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.170651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.170668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.175083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.175114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.175131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.178330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.178359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.178376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.184387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.184415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.184446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.191649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.191678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.191714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.198736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.198767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.198785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.204536] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.204566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.204583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.209924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.209955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.209972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.215445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.215490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.215507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.222180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.222211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.222242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.227887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.227917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.227934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.233924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.233955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.233972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.240313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.240344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.240362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:27:09.132 [2024-07-15 09:35:20.246407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.246457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.246476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.251458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.251489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.251506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.256245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.256274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.256292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.262292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.262337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.262353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.267907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.267937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.267955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.272483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.272513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.272530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.277280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.277310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.277327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.282393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.282423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.282439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.287306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.287335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.287352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.292053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.292082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.292099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.296679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.296708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.296724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.301138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.301167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.132 [2024-07-15 09:35:20.301184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.132 [2024-07-15 09:35:20.305650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.132 [2024-07-15 09:35:20.305679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.133 [2024-07-15 09:35:20.305696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.133 [2024-07-15 09:35:20.310452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.133 [2024-07-15 09:35:20.310481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.133 [2024-07-15 09:35:20.310498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.133 [2024-07-15 09:35:20.316786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.133 [2024-07-15 09:35:20.316824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.133 [2024-07-15 09:35:20.316842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.133 [2024-07-15 09:35:20.322361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.133 [2024-07-15 09:35:20.322392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.133 [2024-07-15 09:35:20.322409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.396 [2024-07-15 09:35:20.326868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.396 [2024-07-15 09:35:20.326897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.396 [2024-07-15 09:35:20.326915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.396 [2024-07-15 09:35:20.331573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.396 [2024-07-15 09:35:20.331602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.396 [2024-07-15 09:35:20.331625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.396 [2024-07-15 09:35:20.336183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.336212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.336229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.341296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.341325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.341342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.346404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.346434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.346451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.351789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.351825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.351843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.356283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.356312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.356328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.361038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.361067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.361084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.365629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.365658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.365675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.370269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.370298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.370315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.374967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.375016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.375033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.379686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.379715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 
[2024-07-15 09:35:20.379731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.384220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.384249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.384266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.388826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.388855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.388872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.394192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.394222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.394239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.399155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.399185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.399202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.404352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.404396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.404414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.409049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.409079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.409096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.413722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.413753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.413770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.418325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.418355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.418372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.423196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.423226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.423243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.428641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.428671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.428688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.433212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.433242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.433259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.437958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.437987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.438004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.443691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.443721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.443738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.451110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.451141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.451159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.458249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.458293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.458312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.465451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.465497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.465519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.472302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.472333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.472351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.476929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.476959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.476976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.397 [2024-07-15 09:35:20.479542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.397 [2024-07-15 09:35:20.479571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.397 [2024-07-15 09:35:20.479588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.482581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.482609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.482626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.485590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.485618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.485635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.488413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.488441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.488458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.491636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.491667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.491684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.495598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.495629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.495647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.499260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.499291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.499309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.504380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.504411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.504428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.508992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.509023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.509041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.514639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 
[2024-07-15 09:35:20.514670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.514688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.520413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.520443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.520461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.526332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.526362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.526380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.532319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.532349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.532366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.538394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.538425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.538443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.544094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.544124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.544148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.549917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.549948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.549965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.556568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.556599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.556616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.563273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.563304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.563321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.571130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.571160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.571177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.575720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.575752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.575770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.580147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.580176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.580209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.585049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.585080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.585097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.398 [2024-07-15 09:35:20.589978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.398 [2024-07-15 09:35:20.590008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.398 [2024-07-15 09:35:20.590025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.658 [2024-07-15 09:35:20.596431] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.659 [2024-07-15 09:35:20.596468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.659 [2024-07-15 09:35:20.596486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.659 [2024-07-15 09:35:20.602615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.659 [2024-07-15 09:35:20.602661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.659 [2024-07-15 09:35:20.602679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.659 [2024-07-15 09:35:20.609059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.659 [2024-07-15 09:35:20.609104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.659 [2024-07-15 09:35:20.609120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.659 [2024-07-15 09:35:20.615635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.659 [2024-07-15 09:35:20.615665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.659 [2024-07-15 09:35:20.615683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.659 [2024-07-15 09:35:20.620893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.659 [2024-07-15 09:35:20.620923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.659 [2024-07-15 09:35:20.620941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.659 [2024-07-15 09:35:20.626179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.659 [2024-07-15 09:35:20.626210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.659 [2024-07-15 09:35:20.626227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.659 [2024-07-15 09:35:20.631406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0) 00:27:09.659 [2024-07-15 09:35:20.631436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.659 [2024-07-15 09:35:20.631454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:09.659 [2024-07-15 09:35:20.636162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0)
00:27:09.659 [2024-07-15 09:35:20.636191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.659 [2024-07-15 09:35:20.636208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.659 [2024-07-15 09:35:20.641323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0)
00:27:09.659 [2024-07-15 09:35:20.641353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.659 [2024-07-15 09:35:20.641370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... roughly 140 further repetitions of the same three-line pattern omitted: a data digest error on tqpair=(0xfd34f0) reported by nvme_tcp.c:1459, the failed READ command it belongs to (qid:1, nsid:1, len:32, varying cid and lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061; timestamps run from 09:35:20.646 through 09:35:21.366 ...]
00:27:10.186 [2024-07-15 09:35:21.374544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0)
00:27:10.186 [2024-07-15 09:35:21.374575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.186 [2024-07-15 09:35:21.374593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:10.446 [2024-07-15 09:35:21.382041]
00:27:10.446 [2024-07-15 09:35:21.382041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfd34f0)
00:27:10.446 [2024-07-15 09:35:21.382072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.446 [2024-07-15 09:35:21.382090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:10.446
00:27:10.446 Latency(us)
00:27:10.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:10.446 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:10.446 nvme0n1 : 2.00 5898.79 737.35 0.00 0.00 2707.39 737.28 12281.93
00:27:10.446 ===================================================================================================================
00:27:10.446 Total : 5898.79 737.35 0.00 0.00 2707.39 737.28 12281.93
00:27:10.446 0
00:27:10.446 09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:10.446 09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:10.446 09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:10.446 | .driver_specific
00:27:10.446 | .nvme_error
00:27:10.446 | .status_code
00:27:10.446 | .command_transient_transport_error'
00:27:10.705 09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 381 > 0 ))
00:27:10.705 09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 935965
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 935965 ']'
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 935965
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 935965
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 935965'
killing process with pid 935965
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 935965
Received shutdown signal, test time was about 2.000000 seconds
00:27:10.705
00:27:10.705 Latency(us)
00:27:10.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:10.705 ===================================================================================================================
00:27:10.705 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:10.705 09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 935965
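Each injected failure above follows the same three-line pattern: the host detects the data digest mismatch, prints the failed READ, and completes it with status (00/22), i.e. status code type 0x0 / status code 0x22, Command Transient Transport Error. The (( 381 > 0 )) check is the pass condition; the counter is read back from bdevperf over RPC. A minimal standalone sketch of that query, assuming a bdevperf instance is serving RPCs on /var/tmp/bperf.sock and was started with bdev_nvme_set_options --nvme-error-stat:

  # sketch only, not part of the log; run from an SPDK checkout
  errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errs > 0 )) && echo "digest errors surfaced as transient transport errors: $errs"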
00:27:10.965 09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=936421
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 936421 /var/tmp/bperf.sock
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 936421 ']'
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
09:35:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-07-15 09:35:21.993877] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
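The randwrite leg is driven the same way as the randread leg above: bdevperf is launched with -z so it idles until perform_tests is issued over its RPC socket. Roughly, with the checkout path shortened and a polling loop standing in for autotest's waitforlisten helper:

  # sketch only, not part of the log
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done   # wait for the RPC socket to appear

The -o 4096 / -q 128 pair mirrors the bs=4096 and qd=128 xtrace assignments above.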
00:27:10.965 [2024-07-15 09:35:21.993951] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936421 ]
00:27:10.965 EAL: No free 2048 kB hugepages reported on node 1
00:27:10.965 [2024-07-15 09:35:22.056953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:10.965 [2024-07-15 09:35:22.170687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:11.224 09:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:11.224 09:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:27:11.224 09:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:11.224 09:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:11.482 09:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
09:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
09:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
09:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
09:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
09:35:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:12.052 nvme0n1
00:27:12.052 09:35:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
09:35:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
09:35:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
09:35:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
09:35:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
09:35:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:12.052 Running I/O for 2 seconds...
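Condensed, the RPC sequence traced above is what arms the write-side test: error statistics plus unlimited retries on the initiator bdev, a controller attached with TCP data digest (--ddgst) enabled, and crc32c corruption injected into the accel layer so computed digests stop matching. A hedged sketch of the same calls, with the bperf_rpc/rpc_cmd wrappers expanded to plain rpc.py (rpc_cmd's target socket is assumed to be the app's default):

  # sketch only, not part of the log
  bperf='scripts/rpc.py -s /var/tmp/bperf.sock'
  $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # clear any stale injection
  $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                            # prints the bdev name, nvme0n1
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 crc32c ops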
00:27:12.052 [2024-07-15 09:35:23.136010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190f6458
00:27:12.052 [2024-07-15 09:35:23.137119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.052 [2024-07-15 09:35:23.137171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:12.052 [2024-07-15 09:35:23.150457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190e4de8
00:27:12.052 [2024-07-15 09:35:23.152223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.052 [2024-07-15 09:35:23.152251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:12.052 [2024-07-15 09:35:23.158968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190e01f8
00:27:12.052 [2024-07-15 09:35:23.159736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.052 [2024-07-15 09:35:23.159779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:27:12.052 [2024-07-15 09:35:23.171008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190f9b30
00:27:12.052 [2024-07-15 09:35:23.171796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.052 [2024-07-15 09:35:23.171856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:27:12.052 [2024-07-15 09:35:23.185174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190ec840
00:27:12.052 [2024-07-15 09:35:23.186572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.052 [2024-07-15 09:35:23.186617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:12.052 [2024-07-15 09:35:23.197575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190fc128
00:27:12.052 [2024-07-15 09:35:23.199124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.052 [2024-07-15 09:35:23.199172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:12.052 [2024-07-15 09:35:23.209901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190f3e60
00:27:12.052 [2024-07-15 09:35:23.211548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.052 [2024-07-15 09:35:23.211595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:12.052 [2024-07-15 09:35:23.221779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190e6300
00:27:12.052 [2024-07-15 09:35:23.223523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.052 [2024-07-15 09:35:23.223565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:12.052 [2024-07-15 09:35:23.230358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190e5a90
00:27:12.052 [2024-07-15 09:35:23.231298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.052 [2024-07-15 09:35:23.231344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:27:12.052 [2024-07-15 09:35:23.243438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190f1868
00:27:12.052 [2024-07-15 09:35:23.244590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.052 [2024-07-15 09:35:23.244635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:12.312 [2024-07-15 09:35:23.255888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190e8d30
00:27:12.312 [2024-07-15 09:35:23.257110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.312 [2024-07-15 09:35:23.257155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:12.312 [2024-07-15 09:35:23.267840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190f7100
00:27:12.312 [2024-07-15 09:35:23.269121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.312 [2024-07-15 09:35:23.269153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:12.312 [2024-07-15 09:35:23.279794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190fb8b8
00:27:12.312 [2024-07-15 09:35:23.281314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.312 [2024-07-15 09:35:23.281356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:12.312 [2024-07-15 09:35:23.291665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190e95a0
00:27:12.312 [2024-07-15 09:35:23.293230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.312 [2024-07-15 09:35:23.293276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:27:12.312 [2024-07-15 09:35:23.300366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190ebb98
00:27:12.312 [2024-07-15 09:35:23.301173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.312 [2024-07-15 09:35:23.301214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.314546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190e23b8
00:27:12.313 [2024-07-15 09:35:23.316031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.316063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.325471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190f1ca0
00:27:12.313 [2024-07-15 09:35:23.326747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.326775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.337041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190ea680
00:27:12.313 [2024-07-15 09:35:23.338085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.338130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.348033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190f4298
00:27:12.313 [2024-07-15 09:35:23.348909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.348951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.359106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190fb8b8
00:27:12.313 [2024-07-15 09:35:23.359866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.359895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.370738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190f0788
00:27:12.313 [2024-07-15 09:35:23.371651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.371697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.385062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190de8a8
00:27:12.313 [2024-07-15 09:35:23.386410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.386465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.396739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190dece0
00:27:12.313 [2024-07-15 09:35:23.398309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.398354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.407430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190eea00
00:27:12.313 [2024-07-15 09:35:23.409135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.409163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.420457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190e6fa8
00:27:12.313 [2024-07-15 09:35:23.421839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.421886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.430367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190ef270
00:27:12.313 [2024-07-15 09:35:23.431018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.431046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.442691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190f57b0
00:27:12.313 [2024-07-15 09:35:23.443477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.443506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.454899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190f0bc0
00:27:12.313 [2024-07-15 09:35:23.455861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.455889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.466323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190fd208
00:27:12.313 [2024-07-15 09:35:23.467620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.467649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.479393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.313 [2024-07-15 09:35:23.479576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.479618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.313 [2024-07-15 09:35:23.493165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.313 [2024-07-15 09:35:23.493383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.313 [2024-07-15 09:35:23.493409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.506922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.507108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.507151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.520745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.520936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.520964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.534339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.534557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.534586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.548145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.548356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.548397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.561747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.561950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.561992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.575432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.575641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.575667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.588883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.589087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.589127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.602293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.602484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.602510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.615762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.615951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.615978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.629436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.629643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.629683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.643000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.643208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.643235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.656347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.656539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.656564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.669960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.670154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.670181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.573 [2024-07-15 09:35:23.683616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.573 [2024-07-15 09:35:23.683832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.573 [2024-07-15 09:35:23.683860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.574 [2024-07-15 09:35:23.697218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.574 [2024-07-15 09:35:23.697410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.574 [2024-07-15 09:35:23.697436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.574 [2024-07-15 09:35:23.710915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.574 [2024-07-15 09:35:23.711126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.574 [2024-07-15 09:35:23.711168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.574 [2024-07-15 09:35:23.724575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.574 [2024-07-15 09:35:23.724786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.574 [2024-07-15 09:35:23.724826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.574 [2024-07-15 09:35:23.738066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.574 [2024-07-15 09:35:23.738270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.574 [2024-07-15 09:35:23.738311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.574 [2024-07-15 09:35:23.751753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.574 [2024-07-15 09:35:23.751942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.574 [2024-07-15 09:35:23.751969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.574 [2024-07-15 09:35:23.765508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.574 [2024-07-15 09:35:23.765742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.574 [2024-07-15 09:35:23.765768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.834 [2024-07-15 09:35:23.779193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.834 [2024-07-15 09:35:23.779392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.834 [2024-07-15 09:35:23.779418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.834 [2024-07-15 09:35:23.792767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.834 [2024-07-15 09:35:23.792969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.834 [2024-07-15 09:35:23.793011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.834 [2024-07-15 09:35:23.806440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.834 [2024-07-15 09:35:23.806622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.834 [2024-07-15 09:35:23.806663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.834 [2024-07-15 09:35:23.820010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.834 [2024-07-15 09:35:23.820204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.834 [2024-07-15 09:35:23.820230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.834 [2024-07-15 09:35:23.833492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.834 [2024-07-15 09:35:23.833675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.834 [2024-07-15 09:35:23.833716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.834 [2024-07-15 09:35:23.847100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:23.847317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:23.847342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.835 [2024-07-15 09:35:23.860658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:23.860858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:23.860898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.835 [2024-07-15 09:35:23.874273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:23.874466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:23.874507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.835 [2024-07-15 09:35:23.887717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:23.887907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:23.887934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.835 [2024-07-15 09:35:23.901287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:23.901486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:23.901527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.835 [2024-07-15 09:35:23.914578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:23.914774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:23.914822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.835 [2024-07-15 09:35:23.928236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:23.928461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:23.928488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.835 [2024-07-15 09:35:23.941877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:23.942070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:23.942096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.835 [2024-07-15 09:35:23.955501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:23.955681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:23.955722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.835 [2024-07-15 09:35:23.969304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:23.969501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:23.969527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.835 [2024-07-15 09:35:23.983075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:23.983281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:23.983322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.835 [2024-07-15 09:35:23.996612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:23.996813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:23.996855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.835 [2024-07-15 09:35:24.010254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:24.010457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:24.010497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:12.835 [2024-07-15 09:35:24.023870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:12.835 [2024-07-15 09:35:24.024062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.835 [2024-07-15 09:35:24.024089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.037657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.037858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.037900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.051302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.051482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.051509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.064797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.065001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.065043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.078399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.078594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.078619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.091994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.092206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.092246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.105581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.105780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.105828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.119214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.119406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.119445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.132784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.133003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.133030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.146232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.146429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.146453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.159837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.160034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.160061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.173449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.173727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.173768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.187434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.187647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.187673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.201166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.201440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.201474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.215208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.215467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.215509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.229061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.229351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.229378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.243090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.243332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.243374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.256964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.257176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.257203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.271111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.271362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.271404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.094 [2024-07-15 09:35:24.285045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.094 [2024-07-15 09:35:24.285368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.094 [2024-07-15 09:35:24.285398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.352 [2024-07-15 09:35:24.299070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.352 [2024-07-15 09:35:24.299312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.352 [2024-07-15 09:35:24.299354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.352 [2024-07-15 09:35:24.313103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.352 [2024-07-15 09:35:24.313362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.352 [2024-07-15 09:35:24.313404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.352 [2024-07-15 09:35:24.327105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.352 [2024-07-15 09:35:24.327331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.352 [2024-07-15 09:35:24.327372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.352 [2024-07-15 09:35:24.341219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.352 [2024-07-15 09:35:24.341461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.352 [2024-07-15 09:35:24.341488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.352 [2024-07-15 09:35:24.355067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.352 [2024-07-15 09:35:24.355303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.352 [2024-07-15 09:35:24.355344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.352 [2024-07-15 09:35:24.368990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.353 [2024-07-15 09:35:24.369276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.353 [2024-07-15 09:35:24.369302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.353 [2024-07-15 09:35:24.382629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.353 [2024-07-15 09:35:24.382857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.353 [2024-07-15 09:35:24.382884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.353 [2024-07-15 09:35:24.396456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.353 [2024-07-15 09:35:24.396685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.353 [2024-07-15 09:35:24.396726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:13.353 [2024-07-15 09:35:24.410301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:13.353 [2024-07-15 09:35:24.410527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:70 nsid:1 lba:23214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.353 [2024-07-15 09:35:24.410554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.353 [2024-07-15 09:35:24.423875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.353 [2024-07-15 09:35:24.424116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.353 [2024-07-15 09:35:24.424157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.353 [2024-07-15 09:35:24.437685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.353 [2024-07-15 09:35:24.437901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.353 [2024-07-15 09:35:24.437928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.353 [2024-07-15 09:35:24.451558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.353 [2024-07-15 09:35:24.451777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.353 [2024-07-15 09:35:24.451809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.353 [2024-07-15 09:35:24.465457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.353 [2024-07-15 09:35:24.465685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.353 [2024-07-15 09:35:24.465727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.353 [2024-07-15 09:35:24.479407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.353 [2024-07-15 09:35:24.479651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.353 [2024-07-15 09:35:24.479676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.353 [2024-07-15 09:35:24.493338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.353 [2024-07-15 09:35:24.493590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.353 [2024-07-15 09:35:24.493617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.353 [2024-07-15 09:35:24.507150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.353 [2024-07-15 09:35:24.507436] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.353 [2024-07-15 09:35:24.507478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.353 [2024-07-15 09:35:24.521092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.353 [2024-07-15 09:35:24.521332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.353 [2024-07-15 09:35:24.521374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.353 [2024-07-15 09:35:24.535129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.353 [2024-07-15 09:35:24.535386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.353 [2024-07-15 09:35:24.535412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.548982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.549214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.549241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.562819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.563071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.563120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.576839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.577197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.577239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.590695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.590936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.590963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.604810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.605062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.605104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.618816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.619023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.619050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.632889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.633118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.633159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.646639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.646929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.646956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.660393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.660605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.660632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.674015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.674283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.674324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.687768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.688011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.688038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.701537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.701760] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.701807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.715303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.715518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.715543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.729143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.729420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.729447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.742734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.742949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.742977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.756410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.756702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.756729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.770234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.613 [2024-07-15 09:35:24.770502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.613 [2024-07-15 09:35:24.770529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.613 [2024-07-15 09:35:24.784172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.614 [2024-07-15 09:35:24.784420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.614 [2024-07-15 09:35:24.784461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.614 [2024-07-15 09:35:24.798030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.614 [2024-07-15 
09:35:24.798256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.614 [2024-07-15 09:35:24.798282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.812289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:24.812519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.812559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.826115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:24.826412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.826454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.840186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:24.840428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.840469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.854058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:24.854342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.854383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.868027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:24.868278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.868318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.881998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:24.882232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.882273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.896117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 
00:27:13.874 [2024-07-15 09:35:24.896425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.896453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.909931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:24.910143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.910169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.924024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:24.924314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.924345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.937620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:24.937854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.937881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.951433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:24.951735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.951778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.965235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:24.965519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.965545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.979204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:24.979487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.979529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:24.993109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with 
pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:24.993321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:24.993362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:25.006857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:25.007062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:25.007089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:25.020715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:25.021010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:25.021038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:25.034522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:25.034757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:25.034799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:25.048526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:25.048773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:25.048812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.874 [2024-07-15 09:35:25.062507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:13.874 [2024-07-15 09:35:25.062743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.874 [2024-07-15 09:35:25.062784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.132 [2024-07-15 09:35:25.076618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988 00:27:14.132 [2024-07-15 09:35:25.076860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.132 [2024-07-15 09:35:25.076887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.132 [2024-07-15 09:35:25.090671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:14.132 [2024-07-15 09:35:25.090919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:14.132 [2024-07-15 09:35:25.090947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:14.132 [2024-07-15 09:35:25.104746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:14.132 [2024-07-15 09:35:25.104979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:14.132 [2024-07-15 09:35:25.105006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:14.132 [2024-07-15 09:35:25.118591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:14.132 [2024-07-15 09:35:25.118858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:14.132 [2024-07-15 09:35:25.118885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:14.132 [2024-07-15 09:35:25.132531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbb86b0) with pdu=0x2000190df988
00:27:14.132 [2024-07-15 09:35:25.132741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:14.132 [2024-07-15 09:35:25.132767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:14.132
00:27:14.132 Latency(us)
00:27:14.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:14.132 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:14.132 nvme0n1 : 2.01 19060.78 74.46 0.00 0.00 6698.91 2742.80 15825.73
00:27:14.132 ===================================================================================================================
00:27:14.132 Total : 19060.78 74.46 0.00 0.00 6698.91 2742.80 15825.73
00:27:14.132 0
00:27:14.132 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:14.132 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:14.132 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:14.132 | .driver_specific
00:27:14.132 | .nvme_error
00:27:14.132 | .status_code
00:27:14.132 | .command_transient_transport_error'
00:27:14.132 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:14.388 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 150 > 0 ))
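The (( 150 > 0 )) assertion above is the pass/fail gate for this phase: provided bdev_nvme_set_options was given --nvme-error-stat (as it is for the follow-up run traced below), the bdev layer keeps per-status-code NVMe completion counters, and get_transient_errcount reads back how many COMMAND TRANSIENT TRANSPORT ERROR completions the injected digest corruption produced, 150 here. Condensed from the trace, the query is a single RPC piped through jq; the following is a sketch using this run's socket and bdev name, not the verbatim digest.sh source:

    # Read the per-status-code NVMe error counters for nvme0n1 over the bdevperf
    # RPC socket and extract the transient transport error count (150 above).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'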
00:27:14.388 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 936421
00:27:14.388 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 936421 ']'
00:27:14.388 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 936421
00:27:14.388 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:14.388 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:14.388 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 936421
00:27:14.388 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:14.388 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:14.388 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 936421'
00:27:14.388 killing process with pid 936421
00:27:14.388 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 936421
00:27:14.388 Received shutdown signal, test time was about 2.000000 seconds
00:27:14.388
00:27:14.388 Latency(us)
00:27:14.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:14.388 ===================================================================================================================
00:27:14.388 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:14.388 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 936421
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=936917
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 936917 /var/tmp/bperf.sock
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 936917 ']'
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:14.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:14.645 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:14.645 [2024-07-15 09:35:25.721070] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:27:14.645 [2024-07-15 09:35:25.721158] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936917 ]
00:27:14.645 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:14.645 Zero copy mechanism will not be used.
00:27:14.645 EAL: No free 2048 kB hugepages reported on node 1
00:27:14.645 [2024-07-15 09:35:25.780955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:14.902 [2024-07-15 09:35:25.890430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:14.902 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:14.902 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:27:14.902 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:14.902 09:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:15.159 09:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:15.159 09:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:15.159 09:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:15.159 09:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:15.159 09:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:15.159 09:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:15.726 nvme0n1
00:27:15.726 09:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:15.726 09:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:15.726 09:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:15.726 09:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:15.726 09:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:15.726 09:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:15.726 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:15.726 Zero copy mechanism will not be used.
00:27:15.726 Running I/O for 2 seconds...
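That completes the setup for the 131072-byte, qd=16 error-injection pass, and the whole recipe is visible in the trace: bdevperf is started idle, error statistics and unlimited retries are enabled on the initiator, the controller is attached with data digests turned on, and only then is CRC32C corruption switched from disable to corrupt, so every digest failure logged below is deliberate. Condensed into plain shell it is roughly the sketch below; SPDK_ROOT is a stand-in for this job's /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, and the accel_error_inject_error calls are shown against the target's default RPC socket, since the traced rpc_cmd (unlike bperf_rpc) does not pass -s /var/tmp/bperf.sock. A sketch of the traced flow, not the verbatim digest.sh source:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # stand-in for this job's checkout

    # Start bdevperf idle; -z makes it wait for a perform_tests RPC instead of
    # starting the randwrite workload immediately.
    "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &

    # Initiator-side options over the bdevperf socket: keep per-status-code error
    # counters and retry I/O indefinitely so injected errors are not fatal.
    "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # Reset any previous injection, then attach with data digest (--ddgst) so
    # every TCP data PDU carries a CRC32C that the receiver verifies.
    "$SPDK_ROOT/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
    "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt crc32c results (arguments as traced) and kick off the 2-second run.
    "$SPDK_ROOT/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests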
00:27:15.726 [2024-07-15 09:35:26.848752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.726 [2024-07-15 09:35:26.849108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.726 [2024-07-15 09:35:26.849167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.726 [2024-07-15 09:35:26.855348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.726 [2024-07-15 09:35:26.855658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.726 [2024-07-15 09:35:26.855688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.726 [2024-07-15 09:35:26.861890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.726 [2024-07-15 09:35:26.862221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.726 [2024-07-15 09:35:26.862265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.726 [2024-07-15 09:35:26.868297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.726 [2024-07-15 09:35:26.868582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.726 [2024-07-15 09:35:26.868610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.726 [2024-07-15 09:35:26.875061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.726 [2024-07-15 09:35:26.875414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.726 [2024-07-15 09:35:26.875443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.727 [2024-07-15 09:35:26.882851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.727 [2024-07-15 09:35:26.883250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.727 [2024-07-15 09:35:26.883294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.727 [2024-07-15 09:35:26.890384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.727 [2024-07-15 09:35:26.890678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.727 [2024-07-15 09:35:26.890706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.727 [2024-07-15 09:35:26.897886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.727 [2024-07-15 09:35:26.898283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.727 [2024-07-15 09:35:26.898325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.727 [2024-07-15 09:35:26.905374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.727 [2024-07-15 09:35:26.905706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.727 [2024-07-15 09:35:26.905735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.727 [2024-07-15 09:35:26.912873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.727 [2024-07-15 09:35:26.913190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.727 [2024-07-15 09:35:26.913218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.727 [2024-07-15 09:35:26.920090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.727 [2024-07-15 09:35:26.920388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.727 [2024-07-15 09:35:26.920417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.987 [2024-07-15 09:35:26.927089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.987 [2024-07-15 09:35:26.927382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.987 [2024-07-15 09:35:26.927416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.987 [2024-07-15 09:35:26.934457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.987 [2024-07-15 09:35:26.934765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.987 [2024-07-15 09:35:26.934816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.987 [2024-07-15 09:35:26.941177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.987 [2024-07-15 09:35:26.941458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.987 [2024-07-15 09:35:26.941487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.987 [2024-07-15 09:35:26.946403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.987 [2024-07-15 09:35:26.946699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.987 [2024-07-15 09:35:26.946729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.987 [2024-07-15 09:35:26.951570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.987 [2024-07-15 09:35:26.951871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.987 [2024-07-15 09:35:26.951900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.987 [2024-07-15 09:35:26.956691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.987 [2024-07-15 09:35:26.957053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.987 [2024-07-15 09:35:26.957082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.987 [2024-07-15 09:35:26.961837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.987 [2024-07-15 09:35:26.962121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.987 [2024-07-15 09:35:26.962149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.987 [2024-07-15 09:35:26.966815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.987 [2024-07-15 09:35:26.967107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.987 [2024-07-15 09:35:26.967136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.987 [2024-07-15 09:35:26.971936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.987 [2024-07-15 09:35:26.972220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.987 [2024-07-15 09:35:26.972249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.987 [2024-07-15 09:35:26.977187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.987 [2024-07-15 09:35:26.977477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.987 [2024-07-15 09:35:26.977506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.987 [2024-07-15 09:35:26.984171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.987 [2024-07-15 09:35:26.984494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.988 [2024-07-15 09:35:26.984523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.988 [2024-07-15 09:35:26.990294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.988 [2024-07-15 09:35:26.990604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.988 [2024-07-15 09:35:26.990632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.988 [2024-07-15 09:35:26.996693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.988 [2024-07-15 09:35:26.996987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.988 [2024-07-15 09:35:26.997016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.988 [2024-07-15 09:35:27.003232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.988 [2024-07-15 09:35:27.003550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.988 [2024-07-15 09:35:27.003580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.988 [2024-07-15 09:35:27.009589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.988 [2024-07-15 09:35:27.009880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.988 [2024-07-15 09:35:27.009909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.988 [2024-07-15 09:35:27.016183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.988 [2024-07-15 09:35:27.016464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.988 [2024-07-15 09:35:27.016493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.988 [2024-07-15 09:35:27.022504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:15.988 [2024-07-15 09:35:27.022786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.988 
00:27:15.988 [2024-07-15 09:35:27.022822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:15.988 [2024-07-15 09:35:27.029841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90
00:27:15.988 [2024-07-15 09:35:27.030127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.988 [2024-07-15 09:35:27.030156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:15.988 [2024-07-15 09:35:27.036304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90
00:27:15.988 [2024-07-15 09:35:27.036577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.988 [2024-07-15 09:35:27.036606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:15.988 [2024-07-15 09:35:27.042717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90
00:27:15.988 [2024-07-15 09:35:27.043006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.988 [2024-07-15 09:35:27.043035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... well over a hundred further identical triplets elided (09:35:27.049 through 09:35:27.779): same tqpair=(0x9edaf0) and pdu=0x2000190fef90, only the timestamp, lba, and sqhd (cycling 0001/0021/0041/0061) change, while the elapsed-time marker advances from 00:27:15.988 to 00:27:16.773 ...]
00:27:16.773 [2024-07-15 09:35:27.783097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90
00:27:16.773 [2024-07-15 09:35:27.783441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.773 [2024-07-15 09:35:27.783470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:16.773 [2024-07-15 09:35:27.788164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data
digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.773 [2024-07-15 09:35:27.788446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.773 [2024-07-15 09:35:27.788474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.792910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.793200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.793228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.797747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.798031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.798059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.803055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.803336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.803364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.809091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.809372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.809401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.815433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.815701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.815731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.822562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.822892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.822921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.828263] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.828545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.828573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.833387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.833668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.833696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.838845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.839160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.839188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.843922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.844202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.844230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.848944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.849227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.849253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.855161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.855443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.855472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.861005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.861286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.861314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:16.774 [2024-07-15 09:35:27.868230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.868565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.868593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.874269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.874553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.874581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.879317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.879597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.879625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.884202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.884483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.884511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.889030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.889314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.889349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.894309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.894591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.894620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.899525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.899815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.899843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.904337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.904619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.904647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.909177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.909459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.909487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.914114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.914392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.914421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.919014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.919327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.919355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.923991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.924301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.924329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.928911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.929193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.929221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.934116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.934435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.934463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.939340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.939653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.939681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.944178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.774 [2024-07-15 09:35:27.944494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.774 [2024-07-15 09:35:27.944523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.774 [2024-07-15 09:35:27.949041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.775 [2024-07-15 09:35:27.949324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.775 [2024-07-15 09:35:27.949352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.775 [2024-07-15 09:35:27.953765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.775 [2024-07-15 09:35:27.954091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.775 [2024-07-15 09:35:27.954120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.775 [2024-07-15 09:35:27.958695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.775 [2024-07-15 09:35:27.958983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.775 [2024-07-15 09:35:27.959011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.775 [2024-07-15 09:35:27.963652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:16.775 [2024-07-15 09:35:27.963974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.775 [2024-07-15 09:35:27.964002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.035 [2024-07-15 09:35:27.968849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.035 [2024-07-15 09:35:27.969135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.035 [2024-07-15 09:35:27.969163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.035 [2024-07-15 09:35:27.975140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.035 [2024-07-15 09:35:27.975433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.035 [2024-07-15 09:35:27.975460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.035 [2024-07-15 09:35:27.981309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.035 [2024-07-15 09:35:27.981409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.035 [2024-07-15 09:35:27.981436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.035 [2024-07-15 09:35:27.986872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.035 [2024-07-15 09:35:27.987140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.035 [2024-07-15 09:35:27.987168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.035 [2024-07-15 09:35:27.991571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.035 [2024-07-15 09:35:27.991858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.035 [2024-07-15 09:35:27.991886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.035 [2024-07-15 09:35:27.996463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.035 [2024-07-15 09:35:27.996769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.035 [2024-07-15 09:35:27.996797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.035 [2024-07-15 09:35:28.002602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.002894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.002922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.008205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.008468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 
09:35:28.008497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.013773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.014057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.014086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.019146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.019390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.019418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.024440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.024728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.024761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.029789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.030058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.030086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.035132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.035405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.035433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.040472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.040764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.040792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.046006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.046313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.046341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.051381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.051663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.051691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.056656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.056984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.057013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.062061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.062336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.062364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.067611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.067918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.067947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.072967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.073272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.073300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.078395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.078704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.078733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.083934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.084172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.084199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.089231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.089514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.089543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.094545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.094851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.094880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.100005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.100299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.100328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.105394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.105637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.105665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.110829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.111096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.111124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.116290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.116567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.116595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.121779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.122023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.122052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.127317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.127608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.127636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.132827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.133067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.133094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.138269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.138531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.138559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.143602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.143864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.143892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.148973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.149322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.149350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.154486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.036 [2024-07-15 09:35:28.154784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.036 [2024-07-15 09:35:28.154823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.036 [2024-07-15 09:35:28.160004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 [2024-07-15 09:35:28.160243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.160271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.037 [2024-07-15 09:35:28.165306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 [2024-07-15 09:35:28.165589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.165622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.037 [2024-07-15 09:35:28.170737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 [2024-07-15 09:35:28.171042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.171070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.037 [2024-07-15 09:35:28.175977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 [2024-07-15 09:35:28.176236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.176264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.037 [2024-07-15 09:35:28.181318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 [2024-07-15 09:35:28.181587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.181615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.037 [2024-07-15 09:35:28.186525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 [2024-07-15 09:35:28.186788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.186836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.037 [2024-07-15 09:35:28.191592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 [2024-07-15 09:35:28.191873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.191901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.037 [2024-07-15 09:35:28.196667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 
[2024-07-15 09:35:28.196949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.196977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.037 [2024-07-15 09:35:28.201890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 [2024-07-15 09:35:28.202140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.202167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.037 [2024-07-15 09:35:28.207193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 [2024-07-15 09:35:28.207481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.207509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.037 [2024-07-15 09:35:28.212536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 [2024-07-15 09:35:28.212818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.212847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.037 [2024-07-15 09:35:28.217768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 [2024-07-15 09:35:28.218019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.218047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.037 [2024-07-15 09:35:28.223001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 [2024-07-15 09:35:28.223240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.223268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.037 [2024-07-15 09:35:28.228287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.037 [2024-07-15 09:35:28.228566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.037 [2024-07-15 09:35:28.228595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.298 [2024-07-15 09:35:28.233404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with 
pdu=0x2000190fef90 00:27:17.298 [2024-07-15 09:35:28.233644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.298 [2024-07-15 09:35:28.233673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.298 [2024-07-15 09:35:28.238596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.298 [2024-07-15 09:35:28.238869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.298 [2024-07-15 09:35:28.238897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.298 [2024-07-15 09:35:28.243808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.298 [2024-07-15 09:35:28.244060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.298 [2024-07-15 09:35:28.244088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.298 [2024-07-15 09:35:28.249193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.298 [2024-07-15 09:35:28.249515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.298 [2024-07-15 09:35:28.249544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.298 [2024-07-15 09:35:28.254368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.298 [2024-07-15 09:35:28.254602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.298 [2024-07-15 09:35:28.254634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.298 [2024-07-15 09:35:28.259655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.298 [2024-07-15 09:35:28.259943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.298 [2024-07-15 09:35:28.259972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.299 [2024-07-15 09:35:28.264885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90 00:27:17.299 [2024-07-15 09:35:28.265144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.299 [2024-07-15 09:35:28.265172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.299 [2024-07-15 09:35:28.269911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90
00:27:17.299 [2024-07-15 09:35:28.270222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.299 [2024-07-15 09:35:28.270249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record pattern — a tcp.c:2067:data_crc32_calc_done data digest error on tqpair=(0x9edaf0), the offending WRITE command print, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for roughly 110 further writes (lba and sqhd varying) from 09:35:28.275 through 09:35:28.838; only the first and last records are kept here ...]
00:27:17.820 [2024-07-15 09:35:28.843137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9edaf0) with pdu=0x2000190fef90
00:27:17.820 [2024-07-15 09:35:28.843299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.820 [2024-07-15 09:35:28.843326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.820
00:27:17.820 Latency(us)
00:27:17.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:17.820 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:17.820 nvme0n1 : 2.00 5712.53 714.07 0.00 0.00 2793.48 1941.81 7670.14
00:27:17.820 ===================================================================================================================
00:27:17.820 Total : 5712.53 714.07 0.00 0.00 2793.48 1941.81 7670.14
00:27:17.820 0
00:27:17.820 09:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:17.820 09:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:17.820 09:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:17.820 | .driver_specific
00:27:17.820 | .nvme_error
00:27:17.820 | .status_code
00:27:17.820 | .command_transient_transport_error'
00:27:17.820 09:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:18.079 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 368 > 0 ))
00:27:18.079 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 936917
00:27:18.079 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 936917 ']'
00:27:18.079 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 936917
00:27:18.079 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:18.079 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:18.079 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 936917
00:27:18.079 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:18.079 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:18.079 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 936917'
00:27:18.079 killing process with pid 936917
00:27:18.079 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 936917
00:27:18.079 Received shutdown signal, test time was about 2.000000 seconds
00:27:18.079
00:27:18.079 Latency(us)
00:27:18.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:18.079 ===================================================================================================================
00:27:18.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
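(The (( 368 > 0 )) assertion above is the test's pass condition: at least one injected data digest error must have been reported back as a transient transport error. A minimal standalone sketch of the same query, with the socket path, bdev name, and jq filter taken from the trace and the filter collapsed onto one line:)

    # count completions that failed with COMMAND TRANSIENT TRANSPORT ERROR on nvme0n1
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) && echo "$errcount transient transport errors recorded"

(In this run the count was 368, matching the 368 digest errors injected on the write path.)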
00:27:18.079 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 936917
00:27:18.339 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 935414
00:27:18.339 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 935414 ']'
00:27:18.339 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 935414
00:27:18.339 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:18.339 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:18.339 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 935414
00:27:18.339 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:27:18.339 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:27:18.339 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 935414'
00:27:18.339 killing process with pid 935414
00:27:18.339 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 935414
00:27:18.339 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 935414
00:27:18.598
00:27:18.598 real 0m15.809s
00:27:18.598 user 0m31.332s
00:27:18.598 sys 0m4.348s
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:18.598 ************************************
00:27:18.598 END TEST nvmf_digest_error
00:27:18.598 ************************************
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:18.598 rmmod nvme_tcp
00:27:18.598 rmmod nvme_fabrics
00:27:18.598 rmmod nvme_keyring
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 935414 ']'
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 935414
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 935414 ']'
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 935414
00:27:18.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (935414) - No such process
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 935414 is not found'
00:27:18.598 Process with pid 935414 is not found
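(killprocess appears twice above — once for the bperf app, once for the nvmf target that is already gone by teardown time. A rough bash sketch of the guard-then-kill shape visible in the @948..@975 trace lines; this is a reconstruction for illustration, not the test/common/autotest_common.sh source, and it assumes the target is a child of the calling shell so wait can reap it:)

    killprocess_sketch() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # @948: a pid argument is required
        if ! kill -0 "$pid" 2>/dev/null; then               # @952: liveness probe
            echo "Process with pid $pid is not found"       # @975: already gone, nothing to do
            return 0
        fi
        if [ "$(uname)" = Linux ]; then                     # @953
            process_name=$(ps --no-headers -o comm= "$pid") # @954
        fi
        # @958: a sudo wrapper would take a different path; that branch is never hit in this run
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"                # @966
        kill "$pid"                                         # @967
        wait "$pid" || true                                 # @972: reap so sockets and ports are freed
    }

(The second invocation above shows why the not-found branch matters: digest.sh already killed pid 935414, so the nvmftestfini cleanup path must tolerate kill -0 failing.)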
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:18.598 09:35:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:21.137 09:35:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:21.137
00:27:21.137 real 0m35.626s
00:27:21.137 user 1m3.111s
00:27:21.137 sys 0m9.988s
00:27:21.137 09:35:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:21.137 09:35:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:21.137 ************************************
00:27:21.137 END TEST nvmf_digest
00:27:21.137 ************************************
00:27:21.137 09:35:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:27:21.137 09:35:31 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:27:21.137 09:35:31 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:27:21.137 09:35:31 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:27:21.137 09:35:31 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:21.137 ************************************
00:27:21.137 START TEST nvmf_bdevperf
00:27:21.137 ************************************
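(Every test in this log is bracketed by the same START/END banners and a real/user/sys timing block, produced by the run_test wrapper invoked above. A simplified sketch of the shape those banners imply — the real wrapper lives in test/common/autotest_common.sh and additionally manages xtrace state and exit codes:)

    run_test_sketch() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # the test script itself, e.g. host/bdevperf.sh --transport=tcp
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }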
00:27:21.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:21.137 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:21.138 09:35:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:21.138 09:35:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:23.038 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:23.038 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:23.038 Found net devices under 0000:09:00.0: cvl_0_0 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.038 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:23.039 Found net devices under 0000:09:00.1: cvl_0_1 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:23.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:27:23.039 00:27:23.039 --- 10.0.0.2 ping statistics --- 00:27:23.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.039 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:23.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:27:23.039 00:27:23.039 --- 10.0.0.1 ping statistics --- 00:27:23.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.039 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=939267 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 939267 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 939267 ']' 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:23.039 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.297 [2024-07-15 09:35:34.258370] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:27:23.297 [2024-07-15 09:35:34.258460] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.297 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.297 [2024-07-15 09:35:34.320006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:23.298 [2024-07-15 09:35:34.418049] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:23.298 [2024-07-15 09:35:34.418102] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.298 [2024-07-15 09:35:34.418130] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.298 [2024-07-15 09:35:34.418141] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.298 [2024-07-15 09:35:34.418150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:23.298 [2024-07-15 09:35:34.418236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:23.298 [2024-07-15 09:35:34.418304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:23.298 [2024-07-15 09:35:34.418308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.557 [2024-07-15 09:35:34.557574] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.557 Malloc0 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
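The five rpc_cmd calls above configure the freshly started target: create the TCP transport, back it with a malloc bdev, expose that bdev as a namespace of cnode1, and listen on the namespaced address. rpc_cmd is the suite's wrapper around the JSON-RPC socket, so roughly the same bring-up can be done with scripts/rpc.py; the unix socket /var/tmp/spdk.sock lives on the shared filesystem, which is why it stays reachable even though nvmf_tgt runs inside cvl_0_0_ns_spdk:

    # equivalent bring-up via scripts/rpc.py (default socket /var/tmp/spdk.sock)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # -u: io-unit size
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                                  # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                # listen in the netns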
00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.557 [2024-07-15 09:35:34.624349] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:23.557 { 00:27:23.557 "params": { 00:27:23.557 "name": "Nvme$subsystem", 00:27:23.557 "trtype": "$TEST_TRANSPORT", 00:27:23.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.557 "adrfam": "ipv4", 00:27:23.557 "trsvcid": "$NVMF_PORT", 00:27:23.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.557 "hdgst": ${hdgst:-false}, 00:27:23.557 "ddgst": ${ddgst:-false} 00:27:23.557 }, 00:27:23.557 "method": "bdev_nvme_attach_controller" 00:27:23.557 } 00:27:23.557 EOF 00:27:23.557 )") 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:23.557 09:35:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:23.557 "params": { 00:27:23.557 "name": "Nvme1", 00:27:23.557 "trtype": "tcp", 00:27:23.557 "traddr": "10.0.0.2", 00:27:23.557 "adrfam": "ipv4", 00:27:23.557 "trsvcid": "4420", 00:27:23.557 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:23.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:23.557 "hdgst": false, 00:27:23.557 "ddgst": false 00:27:23.557 }, 00:27:23.557 "method": "bdev_nvme_attach_controller" 00:27:23.557 }' 00:27:23.557 [2024-07-15 09:35:34.674099] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:27:23.557 [2024-07-15 09:35:34.674189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid939405 ] 00:27:23.557 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.557 [2024-07-15 09:35:34.732880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.814 [2024-07-15 09:35:34.848591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.072 Running I/O for 1 seconds... 
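The --json /dev/fd/62 in the bdevperf command line above is bash process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller config shown inline, and bdevperf reads it as its startup JSON so the initiator connects to cnode1 as bdev Nvme1n1. Standalone, the invocation amounts to (paths as in this workspace):

    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 1
    # -q 128: queue depth    -o 4096: 4 KiB I/Os
    # -w verify: data-verification workload    -t 1: run for one second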
00:27:25.009 00:27:25.009 Latency(us) 00:27:25.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.009 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:25.009 Verification LBA range: start 0x0 length 0x4000 00:27:25.009 Nvme1n1 : 1.01 8476.68 33.11 0.00 0.00 15036.49 3046.21 16796.63 00:27:25.009 =================================================================================================================== 00:27:25.009 Total : 8476.68 33.11 0.00 0.00 15036.49 3046.21 16796.63 00:27:25.268 09:35:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=939553 00:27:25.268 09:35:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:25.268 09:35:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:25.268 09:35:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:25.268 09:35:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:25.268 09:35:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:25.268 09:35:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.268 09:35:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.268 { 00:27:25.268 "params": { 00:27:25.268 "name": "Nvme$subsystem", 00:27:25.268 "trtype": "$TEST_TRANSPORT", 00:27:25.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.268 "adrfam": "ipv4", 00:27:25.268 "trsvcid": "$NVMF_PORT", 00:27:25.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.268 "hdgst": ${hdgst:-false}, 00:27:25.268 "ddgst": ${ddgst:-false} 00:27:25.268 }, 00:27:25.268 "method": "bdev_nvme_attach_controller" 00:27:25.268 } 00:27:25.268 EOF 00:27:25.268 )") 00:27:25.268 09:35:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:25.268 09:35:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:25.268 09:35:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:25.268 09:35:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:25.268 "params": { 00:27:25.268 "name": "Nvme1", 00:27:25.268 "trtype": "tcp", 00:27:25.268 "traddr": "10.0.0.2", 00:27:25.268 "adrfam": "ipv4", 00:27:25.268 "trsvcid": "4420", 00:27:25.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:25.268 "hdgst": false, 00:27:25.268 "ddgst": false 00:27:25.268 }, 00:27:25.268 "method": "bdev_nvme_attach_controller" 00:27:25.268 }' 00:27:25.268 [2024-07-15 09:35:36.446730] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:27:25.268 [2024-07-15 09:35:36.446853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid939553 ] 00:27:25.526 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.526 [2024-07-15 09:35:36.506352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.526 [2024-07-15 09:35:36.618985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.785 Running I/O for 15 seconds... 
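This second run uses -t 15 to leave time for the fault injection that follows: the harness kill -9s the target (pid 939267) while I/O is in flight, and every outstanding command on the initiator then completes with ABORTED - SQ DELETION, NVMe generic status 00/08, "Command Aborted due to SQ Deletion". Each pair of records below is one such command: the queued WRITE/READ, then its aborted completion. A quick way to tally them from a saved copy of this output (filename illustrative):

    grep -c 'ABORTED - SQ DELETION' bdevperf-failover.log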
00:27:28.316 09:35:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 939267 00:27:28.316 09:35:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:28.316 [2024-07-15 09:35:39.413699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.316 [2024-07-15 09:35:39.413757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-15 09:35:39.413811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.316 [2024-07-15 09:35:39.413831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-15 09:35:39.413873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.316 [2024-07-15 09:35:39.413890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-15 09:35:39.413907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.316 [2024-07-15 09:35:39.413923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-15 09:35:39.413939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.316 [2024-07-15 09:35:39.413955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-15 09:35:39.413971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.316 [2024-07-15 09:35:39.413986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-15 09:35:39.414002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.316 [2024-07-15 09:35:39.414016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-15 09:35:39.414032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.316 [2024-07-15 09:35:39.414052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-15 09:35:39.414070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.316 [2024-07-15 09:35:39.414086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.316 [2024-07-15 09:35:39.414102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.316 [2024-07-15 09:35:39.414118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.414974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.414990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.415021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.415051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-15 09:35:39.415081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 
09:35:39.415114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-15 09:35:39.415129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-15 09:35:39.415171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-15 09:35:39.415199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-15 09:35:39.415225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-15 09:35:39.415252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-15 09:35:39.415279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.317 [2024-07-15 09:35:39.415305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-15 09:35:39.415332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-15 09:35:39.415359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-15 09:35:39.415386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-15 09:35:39.415413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.317 [2024-07-15 09:35:39.415440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.317 [2024-07-15 09:35:39.415460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-15 09:35:39.415477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-15 09:35:39.415491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-15 09:35:39.415504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-15 09:35:39.415517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-15 09:35:39.415530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-15 09:35:39.415543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-15 09:35:39.415555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-15 09:35:39.415568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-15 09:35:39.415580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-15 09:35:39.415594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-15 09:35:39.415606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-15 09:35:39.415619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-15 09:35:39.415631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-15 09:35:39.415645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.318 [2024-07-15 09:35:39.415657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.318 [2024-07-15 09:35:39.415670] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.415683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.415696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.415708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.415721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.415734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.415747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.415760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.415773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.415805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.415827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.415843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.415857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.415871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.415886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.415900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.415916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.415931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.415947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.415961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.415976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.415990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.318 [2024-07-15 09:35:39.416448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.318 [2024-07-15 09:35:39.416474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.318 [2024-07-15 09:35:39.416500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.318 [2024-07-15 09:35:39.416526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.318 [2024-07-15 09:35:39.416558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
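The dump above also records how each queued command's data was described. The WRITEs carry their 4 KiB payload in-capsule, described by an SGL Data Block descriptor with offset addressing ("SGL DATA BLOCK OFFSET 0x0 len:0x1000"), while the READs use a Transport SGL Data Block descriptor ("SGL TRANSPORT DATA BLOCK"), deferring the actual transfer to NVMe/TCP data PDUs. A minimal sketch of decoding byte 15 of the 16-byte NVMe SGL descriptor, written against the NVMe base specification rather than SPDK's headers; the example id values are assumptions for illustration:

#include <stdint.h>
#include <stdio.h>

/* Byte 15 of the 16-byte NVMe SGL descriptor:
 * bits 7:4 = descriptor type, bits 3:0 = descriptor sub type. */
static const char *sgl_type_str(uint8_t id)
{
    switch (id >> 4) {
    case 0x0: return "SGL Data Block";           /* in-capsule when subtype = offset */
    case 0x1: return "Bit Bucket";
    case 0x2: return "Segment";
    case 0x3: return "Last Segment";
    case 0x4: return "Keyed Data Block";
    case 0x5: return "Transport SGL Data Block"; /* data moved by transport PDUs */
    default:  return "Reserved";
    }
}

int main(void)
{
    /* Assumed example values: type 0 with subtype 1 (offset addressing) for the
     * in-capsule WRITEs, type 5 with a transport-specific subtype for the READs. */
    uint8_t write_id = 0x01, read_id = 0x5a;
    printf("WRITE: %s\n", sgl_type_str(write_id));
    printf("READ:  %s\n", sgl_type_str(read_id));
    return 0;
}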
00:27:28.318 [2024-07-15 09:35:39.416574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.318 [2024-07-15 09:35:39.416586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.318 [2024-07-15 09:35:39.416613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.318 [2024-07-15 09:35:39.416640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.318 [2024-07-15 09:35:39.416654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.318 [2024-07-15 09:35:39.416667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.416681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.319 [2024-07-15 09:35:39.416694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.416707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.416720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.416734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.416746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.416760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.416773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.416809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.416826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.416841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.416862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.416878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.416892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.416908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.416922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.416941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.416957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.416972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.416986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.319 [2024-07-15 09:35:39.417620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcb4c0 is same with the state(5) to be set
00:27:28.319 [2024-07-15 09:35:39.417653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:28.319 [2024-07-15 09:35:39.417663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:28.319 [2024-07-15 09:35:39.417675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43752 len:8 PRP1 0x0 PRP2 0x0
00:27:28.319 [2024-07-15 09:35:39.417687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417743] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdcb4c0 was disconnected and freed. reset controller.
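Every completion in the dump carries the same status, printed as "ABORTED - SQ DELETION (00/08)": status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), followed by the phase, more, and do-not-retry bits. A hedged sketch of pulling those fields out of completion dword 3 as the NVMe specification lays them out (SPDK's own decoder is the spdk_nvme_print_completion path shown above; this is a standalone illustration):

#include <stdint.h>
#include <stdio.h>

/* NVMe completion queue entry, dword 3:
 * bits 15:0 CID, bit 16 phase (P), bits 24:17 status code (SC),
 * bits 27:25 status code type (SCT), bit 30 more (M), bit 31 do-not-retry (DNR). */
int main(void)
{
    uint32_t dw3 = 0x00100000u; /* example: SC=0x08 (<<17), SCT=0, P=0, M=0, DNR=0 */

    uint16_t cid = dw3 & 0xffff;
    unsigned p   = (dw3 >> 16) & 0x1;
    unsigned sc  = (dw3 >> 17) & 0xff;
    unsigned sct = (dw3 >> 25) & 0x7;
    unsigned m   = (dw3 >> 30) & 0x1;
    unsigned dnr = (dw3 >> 31) & 0x1;

    /* Mirrors the log's "(00/08) ... p:0 m:0 dnr:0" formatting. */
    printf("(%02x/%02x) cid:%u p:%u m:%u dnr:%u\n", sct, sc, cid, p, m, dnr);
    return 0;
}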
00:27:28.319 [2024-07-15 09:35:39.417830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:28.319 [2024-07-15 09:35:39.417853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:28.319 [2024-07-15 09:35:39.417881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:28.319 [2024-07-15 09:35:39.417913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:28.319 [2024-07-15 09:35:39.417941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.319 [2024-07-15 09:35:39.417953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.319 [2024-07-15 09:35:39.421217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.319 [2024-07-15 09:35:39.421250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.319 [2024-07-15 09:35:39.421771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.319 [2024-07-15 09:35:39.421823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.319 [2024-07-15 09:35:39.421842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.319 [2024-07-15 09:35:39.422057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.319 [2024-07-15 09:35:39.422274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.319 [2024-07-15 09:35:39.422293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.319 [2024-07-15 09:35:39.422307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.320 [2024-07-15 09:35:39.425403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
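errno 111 on Linux is ECONNREFUSED: nothing is listening at 10.0.0.2:4420 any longer (the test tears the listener down to exercise the reset path), so the kernel answers each reconnect attempt with a TCP reset. A self-contained sketch of the same failure mode, with the address and port copied from the log; run against any host with no listener on that port, the connect is refused the same way:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no NVMe/TCP listener on the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}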
00:27:28.320 [2024-07-15 09:35:39.434762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.320 [2024-07-15 09:35:39.435144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.320 [2024-07-15 09:35:39.435174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.320 [2024-07-15 09:35:39.435191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.320 [2024-07-15 09:35:39.435435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.320 [2024-07-15 09:35:39.435628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.320 [2024-07-15 09:35:39.435652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.320 [2024-07-15 09:35:39.435665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.320 [2024-07-15 09:35:39.438664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.320 [2024-07-15 09:35:39.447949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.320 [2024-07-15 09:35:39.448288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.320 [2024-07-15 09:35:39.448314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.320 [2024-07-15 09:35:39.448330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.320 [2024-07-15 09:35:39.448530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.320 [2024-07-15 09:35:39.448756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.320 [2024-07-15 09:35:39.448775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.320 [2024-07-15 09:35:39.448810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.320 [2024-07-15 09:35:39.451816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.320 [2024-07-15 09:35:39.461186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.320 [2024-07-15 09:35:39.461579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.320 [2024-07-15 09:35:39.461606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.320 [2024-07-15 09:35:39.461621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.320 [2024-07-15 09:35:39.461838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.320 [2024-07-15 09:35:39.462068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.320 [2024-07-15 09:35:39.462088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.320 [2024-07-15 09:35:39.462101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.320 [2024-07-15 09:35:39.464921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.320 [2024-07-15 09:35:39.474454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.320 [2024-07-15 09:35:39.474823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.320 [2024-07-15 09:35:39.474852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.320 [2024-07-15 09:35:39.474868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.320 [2024-07-15 09:35:39.475081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.320 [2024-07-15 09:35:39.475309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.320 [2024-07-15 09:35:39.475327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.320 [2024-07-15 09:35:39.475339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.320 [2024-07-15 09:35:39.478238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.320 [2024-07-15 09:35:39.487628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.320 [2024-07-15 09:35:39.488063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.320 [2024-07-15 09:35:39.488091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.320 [2024-07-15 09:35:39.488124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.320 [2024-07-15 09:35:39.488364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.320 [2024-07-15 09:35:39.488572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.320 [2024-07-15 09:35:39.488590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.320 [2024-07-15 09:35:39.488602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.320 [2024-07-15 09:35:39.491471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.320 [2024-07-15 09:35:39.500920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.320 [2024-07-15 09:35:39.501312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.320 [2024-07-15 09:35:39.501355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.320 [2024-07-15 09:35:39.501370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.320 [2024-07-15 09:35:39.501639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.320 [2024-07-15 09:35:39.501878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.320 [2024-07-15 09:35:39.501900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.320 [2024-07-15 09:35:39.501913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.320 [2024-07-15 09:35:39.504849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.581 [2024-07-15 09:35:39.514124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.581 [2024-07-15 09:35:39.514619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 09:35:39.514662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.581 [2024-07-15 09:35:39.514680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.581 [2024-07-15 09:35:39.514963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.581 [2024-07-15 09:35:39.515196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.581 [2024-07-15 09:35:39.515215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.581 [2024-07-15 09:35:39.515228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.582 [2024-07-15 09:35:39.518310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.582 [2024-07-15 09:35:39.527271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.582 [2024-07-15 09:35:39.527637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 09:35:39.527679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.582 [2024-07-15 09:35:39.527695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.582 [2024-07-15 09:35:39.527964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.582 [2024-07-15 09:35:39.528176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.582 [2024-07-15 09:35:39.528195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.582 [2024-07-15 09:35:39.528207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.582 [2024-07-15 09:35:39.531118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.582 [2024-07-15 09:35:39.540464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.582 [2024-07-15 09:35:39.540906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 09:35:39.540948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.582 [2024-07-15 09:35:39.540965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.582 [2024-07-15 09:35:39.541204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.582 [2024-07-15 09:35:39.541397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.582 [2024-07-15 09:35:39.541415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.582 [2024-07-15 09:35:39.541427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.582 [2024-07-15 09:35:39.544382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.582 [2024-07-15 09:35:39.553494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.582 [2024-07-15 09:35:39.553860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 09:35:39.553904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.582 [2024-07-15 09:35:39.553920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.582 [2024-07-15 09:35:39.554192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.582 [2024-07-15 09:35:39.554385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.582 [2024-07-15 09:35:39.554403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.582 [2024-07-15 09:35:39.554415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.582 [2024-07-15 09:35:39.557369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.582 [2024-07-15 09:35:39.566747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.582 [2024-07-15 09:35:39.567179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 09:35:39.567220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.582 [2024-07-15 09:35:39.567237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.582 [2024-07-15 09:35:39.567475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.582 [2024-07-15 09:35:39.567682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.582 [2024-07-15 09:35:39.567702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.582 [2024-07-15 09:35:39.567718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.582 [2024-07-15 09:35:39.570667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.582 [2024-07-15 09:35:39.579890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.582 [2024-07-15 09:35:39.580298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 09:35:39.580323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.582 [2024-07-15 09:35:39.580337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.582 [2024-07-15 09:35:39.580567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.582 [2024-07-15 09:35:39.580773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.582 [2024-07-15 09:35:39.580819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.582 [2024-07-15 09:35:39.580832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.582 [2024-07-15 09:35:39.583736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.582 [2024-07-15 09:35:39.592994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.582 [2024-07-15 09:35:39.593359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 09:35:39.593401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.582 [2024-07-15 09:35:39.593416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.582 [2024-07-15 09:35:39.593669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.582 [2024-07-15 09:35:39.593922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.582 [2024-07-15 09:35:39.593942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.582 [2024-07-15 09:35:39.593955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.582 [2024-07-15 09:35:39.596862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.582 [2024-07-15 09:35:39.606077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.582 [2024-07-15 09:35:39.606509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 09:35:39.606551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.582 [2024-07-15 09:35:39.606568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.582 [2024-07-15 09:35:39.606815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.582 [2024-07-15 09:35:39.607026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.582 [2024-07-15 09:35:39.607045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.582 [2024-07-15 09:35:39.607057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.582 [2024-07-15 09:35:39.609869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.582 [2024-07-15 09:35:39.619245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.582 [2024-07-15 09:35:39.619671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 09:35:39.619703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.582 [2024-07-15 09:35:39.619719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.582 [2024-07-15 09:35:39.619983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.582 [2024-07-15 09:35:39.620215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.582 [2024-07-15 09:35:39.620234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.582 [2024-07-15 09:35:39.620246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.582 [2024-07-15 09:35:39.623135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.582 [2024-07-15 09:35:39.632251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.582 [2024-07-15 09:35:39.632590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 09:35:39.632618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.582 [2024-07-15 09:35:39.632633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.582 [2024-07-15 09:35:39.632866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.582 [2024-07-15 09:35:39.633071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.582 [2024-07-15 09:35:39.633104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.582 [2024-07-15 09:35:39.633117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.582 [2024-07-15 09:35:39.636002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
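The "(9): Bad file descriptor" in each failed flush is errno 9, EBADF: by the time the qpair's flush path runs, its socket has already been torn down. The minimal reproduction is any write on a descriptor that was closed first:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return 1;
    close(fds[1]);                    /* the descriptor goes away first... */

    if (write(fds[1], "x", 1) < 0) {  /* ...then a late flush touches it */
        /* Prints: write failed (9): Bad file descriptor */
        printf("write failed (%d): %s\n", errno, strerror(errno));
    }
    close(fds[0]);
    return 0;
}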
00:27:28.582 [2024-07-15 09:35:39.645317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.582 [2024-07-15 09:35:39.645742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-15 09:35:39.645769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:28.582 [2024-07-15 09:35:39.645785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:28.582 [2024-07-15 09:35:39.646048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:28.582 [2024-07-15 09:35:39.646276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.582 [2024-07-15 09:35:39.646295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.582 [2024-07-15 09:35:39.646307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.582 [2024-07-15 09:35:39.649194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.582 [2024-07-15 09:35:39.658412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.582 [2024-07-15 09:35:39.658743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-15 09:35:39.658770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:28.582 [2024-07-15 09:35:39.658785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:28.582 [2024-07-15 09:35:39.659049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:28.582 [2024-07-15 09:35:39.659283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.583 [2024-07-15 09:35:39.659302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.583 [2024-07-15 09:35:39.659314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.583 [2024-07-15 09:35:39.662201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.583 [2024-07-15 09:35:39.671416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.583 [2024-07-15 09:35:39.671815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 09:35:39.671843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:28.583 [2024-07-15 09:35:39.671859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:28.583 [2024-07-15 09:35:39.672087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:28.583 [2024-07-15 09:35:39.672344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.583 [2024-07-15 09:35:39.672365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.583 [2024-07-15 09:35:39.672379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.583 [2024-07-15 09:35:39.675881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.583 [2024-07-15 09:35:39.685358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.583 [2024-07-15 09:35:39.685723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 09:35:39.685751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:28.583 [2024-07-15 09:35:39.685767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:28.583 [2024-07-15 09:35:39.686017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:28.583 [2024-07-15 09:35:39.686244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.583 [2024-07-15 09:35:39.686263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.583 [2024-07-15 09:35:39.686275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.583 [2024-07-15 09:35:39.689313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.583 [2024-07-15 09:35:39.698483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.583 [2024-07-15 09:35:39.698857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 09:35:39.698898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:28.583 [2024-07-15 09:35:39.698914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:28.583 [2024-07-15 09:35:39.699137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:28.583 [2024-07-15 09:35:39.699344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.583 [2024-07-15 09:35:39.699363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.583 [2024-07-15 09:35:39.699374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.583 [2024-07-15 09:35:39.702296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.583 [2024-07-15 09:35:39.711522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.583 [2024-07-15 09:35:39.711887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 09:35:39.711930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:28.583 [2024-07-15 09:35:39.711945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:28.583 [2024-07-15 09:35:39.712197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:28.583 [2024-07-15 09:35:39.712404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.583 [2024-07-15 09:35:39.712423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.583 [2024-07-15 09:35:39.712434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.583 [2024-07-15 09:35:39.715356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.583 [2024-07-15 09:35:39.724971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.583 [2024-07-15 09:35:39.725314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 09:35:39.725354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:28.583 [2024-07-15 09:35:39.725369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:28.583 [2024-07-15 09:35:39.725583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:28.583 [2024-07-15 09:35:39.725793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.583 [2024-07-15 09:35:39.725846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.583 [2024-07-15 09:35:39.725860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.583 [2024-07-15 09:35:39.728873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.583 [2024-07-15 09:35:39.738257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.583 [2024-07-15 09:35:39.738745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 09:35:39.738786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:28.583 [2024-07-15 09:35:39.738810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:28.583 [2024-07-15 09:35:39.739065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:28.583 [2024-07-15 09:35:39.739294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.583 [2024-07-15 09:35:39.739313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.583 [2024-07-15 09:35:39.739325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.583 [2024-07-15 09:35:39.742224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.583 [2024-07-15 09:35:39.751357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.583 [2024-07-15 09:35:39.751689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 09:35:39.751716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:28.583 [2024-07-15 09:35:39.751736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:28.583 [2024-07-15 09:35:39.751970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:28.583 [2024-07-15 09:35:39.752196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.583 [2024-07-15 09:35:39.752215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.583 [2024-07-15 09:35:39.752226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.583 [2024-07-15 09:35:39.755139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.583 [2024-07-15 09:35:39.764519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.583 [2024-07-15 09:35:39.764945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 09:35:39.764973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:28.583 [2024-07-15 09:35:39.765004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:28.583 [2024-07-15 09:35:39.765244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:28.583 [2024-07-15 09:35:39.765435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.583 [2024-07-15 09:35:39.765454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.583 [2024-07-15 09:35:39.765466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.583 [2024-07-15 09:35:39.768420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.845 [2024-07-15 09:35:39.777999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.845 [2024-07-15 09:35:39.778372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.845 [2024-07-15 09:35:39.778400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:28.845 [2024-07-15 09:35:39.778416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:28.845 [2024-07-15 09:35:39.778637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:28.845 [2024-07-15 09:35:39.778893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.845 [2024-07-15 09:35:39.778915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.845 [2024-07-15 09:35:39.778929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.845 [2024-07-15 09:35:39.782076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.845 [2024-07-15 09:35:39.791141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.845 [2024-07-15 09:35:39.791566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.845 [2024-07-15 09:35:39.791608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:28.845 [2024-07-15 09:35:39.791624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:28.845 [2024-07-15 09:35:39.791880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:28.845 [2024-07-15 09:35:39.792099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.845 [2024-07-15 09:35:39.792124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.845 [2024-07-15 09:35:39.792136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.845 [2024-07-15 09:35:39.795019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.845 [2024-07-15 09:35:39.804175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.845 [2024-07-15 09:35:39.804538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.845 [2024-07-15 09:35:39.804579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.845 [2024-07-15 09:35:39.804594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.845 [2024-07-15 09:35:39.804851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.845 [2024-07-15 09:35:39.805070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.845 [2024-07-15 09:35:39.805090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.845 [2024-07-15 09:35:39.805103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.845 [2024-07-15 09:35:39.808009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.845 [2024-07-15 09:35:39.817315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.845 [2024-07-15 09:35:39.817706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.845 [2024-07-15 09:35:39.817733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.845 [2024-07-15 09:35:39.817749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.845 [2024-07-15 09:35:39.818013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.845 [2024-07-15 09:35:39.818243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.845 [2024-07-15 09:35:39.818261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.845 [2024-07-15 09:35:39.818273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.845 [2024-07-15 09:35:39.821161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.845 [2024-07-15 09:35:39.830373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.845 [2024-07-15 09:35:39.830796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.845 [2024-07-15 09:35:39.830846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.845 [2024-07-15 09:35:39.830862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.845 [2024-07-15 09:35:39.831102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.845 [2024-07-15 09:35:39.831310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.845 [2024-07-15 09:35:39.831328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.845 [2024-07-15 09:35:39.831340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.845 [2024-07-15 09:35:39.834252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.845 [2024-07-15 09:35:39.843362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.845 [2024-07-15 09:35:39.843796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.845 [2024-07-15 09:35:39.843845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.845 [2024-07-15 09:35:39.843862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.845 [2024-07-15 09:35:39.844100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.845 [2024-07-15 09:35:39.844308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.845 [2024-07-15 09:35:39.844327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.845 [2024-07-15 09:35:39.844338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.845 [2024-07-15 09:35:39.847149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.845 [2024-07-15 09:35:39.856374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.845 [2024-07-15 09:35:39.856702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.845 [2024-07-15 09:35:39.856728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.845 [2024-07-15 09:35:39.856743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.845 [2024-07-15 09:35:39.856994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.845 [2024-07-15 09:35:39.857223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.845 [2024-07-15 09:35:39.857241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.845 [2024-07-15 09:35:39.857253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.845 [2024-07-15 09:35:39.860141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.845 [2024-07-15 09:35:39.869394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.845 [2024-07-15 09:35:39.869752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.845 [2024-07-15 09:35:39.869777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.845 [2024-07-15 09:35:39.869792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.845 [2024-07-15 09:35:39.870056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.845 [2024-07-15 09:35:39.870283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.845 [2024-07-15 09:35:39.870301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.845 [2024-07-15 09:35:39.870313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.846 [2024-07-15 09:35:39.873202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.846 [2024-07-15 09:35:39.882493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.846 [2024-07-15 09:35:39.882891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.846 [2024-07-15 09:35:39.882917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.846 [2024-07-15 09:35:39.882931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.846 [2024-07-15 09:35:39.883188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.846 [2024-07-15 09:35:39.883381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.846 [2024-07-15 09:35:39.883399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.846 [2024-07-15 09:35:39.883411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.846 [2024-07-15 09:35:39.886340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.846 [2024-07-15 09:35:39.895635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.846 [2024-07-15 09:35:39.896065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.846 [2024-07-15 09:35:39.896093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.846 [2024-07-15 09:35:39.896108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.846 [2024-07-15 09:35:39.896343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.846 [2024-07-15 09:35:39.896553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.846 [2024-07-15 09:35:39.896571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.846 [2024-07-15 09:35:39.896583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.846 [2024-07-15 09:35:39.899536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.846 [2024-07-15 09:35:39.908614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.846 [2024-07-15 09:35:39.908949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.846 [2024-07-15 09:35:39.908976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.846 [2024-07-15 09:35:39.908992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.846 [2024-07-15 09:35:39.909212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.846 [2024-07-15 09:35:39.909419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.846 [2024-07-15 09:35:39.909438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.846 [2024-07-15 09:35:39.909450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.846 [2024-07-15 09:35:39.912262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.846 [2024-07-15 09:35:39.921723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.846 [2024-07-15 09:35:39.922082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.846 [2024-07-15 09:35:39.922110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.846 [2024-07-15 09:35:39.922126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.846 [2024-07-15 09:35:39.922338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.846 [2024-07-15 09:35:39.922604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.846 [2024-07-15 09:35:39.922625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.846 [2024-07-15 09:35:39.922643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.846 [2024-07-15 09:35:39.926136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.846 [2024-07-15 09:35:39.934872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.846 [2024-07-15 09:35:39.935257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.846 [2024-07-15 09:35:39.935299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.846 [2024-07-15 09:35:39.935315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.846 [2024-07-15 09:35:39.935570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.846 [2024-07-15 09:35:39.935783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.846 [2024-07-15 09:35:39.935810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.846 [2024-07-15 09:35:39.935840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.846 [2024-07-15 09:35:39.938711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.846 [2024-07-15 09:35:39.948097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.846 [2024-07-15 09:35:39.948436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.846 [2024-07-15 09:35:39.948464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.846 [2024-07-15 09:35:39.948479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.846 [2024-07-15 09:35:39.948701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.846 [2024-07-15 09:35:39.948958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.846 [2024-07-15 09:35:39.948979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.846 [2024-07-15 09:35:39.948991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.846 [2024-07-15 09:35:39.951838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.846 [2024-07-15 09:35:39.961142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.846 [2024-07-15 09:35:39.961454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.846 [2024-07-15 09:35:39.961480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.846 [2024-07-15 09:35:39.961495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.846 [2024-07-15 09:35:39.961709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.846 [2024-07-15 09:35:39.961947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.846 [2024-07-15 09:35:39.961968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.846 [2024-07-15 09:35:39.961980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.846 [2024-07-15 09:35:39.964763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.846 [2024-07-15 09:35:39.974243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.846 [2024-07-15 09:35:39.974686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.846 [2024-07-15 09:35:39.974717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.846 [2024-07-15 09:35:39.974733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.846 [2024-07-15 09:35:39.974996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.846 [2024-07-15 09:35:39.975210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.846 [2024-07-15 09:35:39.975229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.846 [2024-07-15 09:35:39.975241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.846 [2024-07-15 09:35:39.978129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.846 [2024-07-15 09:35:39.987426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.846 [2024-07-15 09:35:39.987750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.846 [2024-07-15 09:35:39.987777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.847 [2024-07-15 09:35:39.987793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.847 [2024-07-15 09:35:39.988048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.847 [2024-07-15 09:35:39.988275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.847 [2024-07-15 09:35:39.988294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.847 [2024-07-15 09:35:39.988306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.847 [2024-07-15 09:35:39.991235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.847 [2024-07-15 09:35:40.000564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.847 [2024-07-15 09:35:40.000970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.847 [2024-07-15 09:35:40.000998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.847 [2024-07-15 09:35:40.001014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.847 [2024-07-15 09:35:40.001260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.847 [2024-07-15 09:35:40.001514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.847 [2024-07-15 09:35:40.001544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.847 [2024-07-15 09:35:40.001567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.847 [2024-07-15 09:35:40.005052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.847 [2024-07-15 09:35:40.013932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.847 [2024-07-15 09:35:40.014388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.847 [2024-07-15 09:35:40.014417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.847 [2024-07-15 09:35:40.014447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.847 [2024-07-15 09:35:40.014681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.847 [2024-07-15 09:35:40.014935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.847 [2024-07-15 09:35:40.014958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.847 [2024-07-15 09:35:40.014972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.847 [2024-07-15 09:35:40.018541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.847 [2024-07-15 09:35:40.027321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.847 [2024-07-15 09:35:40.027713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.847 [2024-07-15 09:35:40.027744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:28.847 [2024-07-15 09:35:40.027761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:28.847 [2024-07-15 09:35:40.028001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:28.847 [2024-07-15 09:35:40.028239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.847 [2024-07-15 09:35:40.028259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.847 [2024-07-15 09:35:40.028272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.847 [2024-07-15 09:35:40.031246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.109 [2024-07-15 09:35:40.040765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.109 [2024-07-15 09:35:40.041214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.109 [2024-07-15 09:35:40.041243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.109 [2024-07-15 09:35:40.041260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.109 [2024-07-15 09:35:40.041502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.109 [2024-07-15 09:35:40.041716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.109 [2024-07-15 09:35:40.041736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.109 [2024-07-15 09:35:40.041748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.109 [2024-07-15 09:35:40.044889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.109 [2024-07-15 09:35:40.054048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.109 [2024-07-15 09:35:40.054463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.109 [2024-07-15 09:35:40.054507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.109 [2024-07-15 09:35:40.054523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.109 [2024-07-15 09:35:40.054770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.109 [2024-07-15 09:35:40.054997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.109 [2024-07-15 09:35:40.055018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.109 [2024-07-15 09:35:40.055032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.109 [2024-07-15 09:35:40.058048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.109 [2024-07-15 09:35:40.067357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.109 [2024-07-15 09:35:40.067732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.109 [2024-07-15 09:35:40.067774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.109 [2024-07-15 09:35:40.067790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.109 [2024-07-15 09:35:40.068041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.109 [2024-07-15 09:35:40.068278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.109 [2024-07-15 09:35:40.068296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.109 [2024-07-15 09:35:40.068308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.109 [2024-07-15 09:35:40.071549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.109 [2024-07-15 09:35:40.080684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.109 [2024-07-15 09:35:40.081069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.109 [2024-07-15 09:35:40.081104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.109 [2024-07-15 09:35:40.081120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.109 [2024-07-15 09:35:40.081359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.109 [2024-07-15 09:35:40.081551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.109 [2024-07-15 09:35:40.081570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.109 [2024-07-15 09:35:40.081582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.109 [2024-07-15 09:35:40.084543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.109 [2024-07-15 09:35:40.094005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.109 [2024-07-15 09:35:40.094403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.109 [2024-07-15 09:35:40.094444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.109 [2024-07-15 09:35:40.094459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.109 [2024-07-15 09:35:40.094705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.109 [2024-07-15 09:35:40.094943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.109 [2024-07-15 09:35:40.094965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.109 [2024-07-15 09:35:40.094978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.109 [2024-07-15 09:35:40.097921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.109 [2024-07-15 09:35:40.107159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.109 [2024-07-15 09:35:40.107529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.109 [2024-07-15 09:35:40.107572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.109 [2024-07-15 09:35:40.107592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.109 [2024-07-15 09:35:40.107871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.109 [2024-07-15 09:35:40.108075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.109 [2024-07-15 09:35:40.108111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.109 [2024-07-15 09:35:40.108123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.109 [2024-07-15 09:35:40.111055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.109 [2024-07-15 09:35:40.120814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.109 [2024-07-15 09:35:40.121194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.109 [2024-07-15 09:35:40.121236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.109 [2024-07-15 09:35:40.121251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.109 [2024-07-15 09:35:40.121503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.109 [2024-07-15 09:35:40.121711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.110 [2024-07-15 09:35:40.121730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.110 [2024-07-15 09:35:40.121741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.110 [2024-07-15 09:35:40.124641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.110 [2024-07-15 09:35:40.134085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.110 [2024-07-15 09:35:40.134463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.110 [2024-07-15 09:35:40.134491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.110 [2024-07-15 09:35:40.134506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.110 [2024-07-15 09:35:40.134740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.110 [2024-07-15 09:35:40.134975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.110 [2024-07-15 09:35:40.134996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.110 [2024-07-15 09:35:40.135008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.110 [2024-07-15 09:35:40.137934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.110 [2024-07-15 09:35:40.147360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.110 [2024-07-15 09:35:40.147723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.110 [2024-07-15 09:35:40.147789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.110 [2024-07-15 09:35:40.147813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.110 [2024-07-15 09:35:40.148066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.110 [2024-07-15 09:35:40.148293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.110 [2024-07-15 09:35:40.148316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.110 [2024-07-15 09:35:40.148329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.110 [2024-07-15 09:35:40.151233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.110 [2024-07-15 09:35:40.160454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.110 [2024-07-15 09:35:40.160792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.110 [2024-07-15 09:35:40.160863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.110 [2024-07-15 09:35:40.160879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.110 [2024-07-15 09:35:40.161122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.110 [2024-07-15 09:35:40.161348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.110 [2024-07-15 09:35:40.161367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.110 [2024-07-15 09:35:40.161379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.110 [2024-07-15 09:35:40.164309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.110 [2024-07-15 09:35:40.173672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.110 [2024-07-15 09:35:40.174066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.110 [2024-07-15 09:35:40.174095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.110 [2024-07-15 09:35:40.174111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.110 [2024-07-15 09:35:40.174339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.110 [2024-07-15 09:35:40.174596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.110 [2024-07-15 09:35:40.174618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.110 [2024-07-15 09:35:40.174631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.110 [2024-07-15 09:35:40.177928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.110 [2024-07-15 09:35:40.186959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.110 [2024-07-15 09:35:40.187425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.110 [2024-07-15 09:35:40.187467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.110 [2024-07-15 09:35:40.187484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.110 [2024-07-15 09:35:40.187722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.110 [2024-07-15 09:35:40.187988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.110 [2024-07-15 09:35:40.188009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.110 [2024-07-15 09:35:40.188022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.110 [2024-07-15 09:35:40.191043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.110 [2024-07-15 09:35:40.200244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.110 [2024-07-15 09:35:40.200640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.110 [2024-07-15 09:35:40.200680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.110 [2024-07-15 09:35:40.200696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.110 [2024-07-15 09:35:40.200927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.110 [2024-07-15 09:35:40.201140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.110 [2024-07-15 09:35:40.201159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.110 [2024-07-15 09:35:40.201170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.110 [2024-07-15 09:35:40.204088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.110 [2024-07-15 09:35:40.213513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.110 [2024-07-15 09:35:40.213850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.110 [2024-07-15 09:35:40.213879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.110 [2024-07-15 09:35:40.213894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.110 [2024-07-15 09:35:40.214121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.110 [2024-07-15 09:35:40.214329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.110 [2024-07-15 09:35:40.214347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.110 [2024-07-15 09:35:40.214359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.110 [2024-07-15 09:35:40.217288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.110 [2024-07-15 09:35:40.226708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.110 [2024-07-15 09:35:40.227117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.110 [2024-07-15 09:35:40.227159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.110 [2024-07-15 09:35:40.227174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.110 [2024-07-15 09:35:40.227413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.110 [2024-07-15 09:35:40.227621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.110 [2024-07-15 09:35:40.227640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.110 [2024-07-15 09:35:40.227652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.110 [2024-07-15 09:35:40.230584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.110 [2024-07-15 09:35:40.239831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.111 [2024-07-15 09:35:40.240296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.111 [2024-07-15 09:35:40.240349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.111 [2024-07-15 09:35:40.240364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.111 [2024-07-15 09:35:40.240633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.111 [2024-07-15 09:35:40.240835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.111 [2024-07-15 09:35:40.240855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.111 [2024-07-15 09:35:40.240871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.111 [2024-07-15 09:35:40.243637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.111 [2024-07-15 09:35:40.253100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.111 [2024-07-15 09:35:40.253488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.111 [2024-07-15 09:35:40.253530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.111 [2024-07-15 09:35:40.253546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.111 [2024-07-15 09:35:40.253769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.111 [2024-07-15 09:35:40.254008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.111 [2024-07-15 09:35:40.254029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.111 [2024-07-15 09:35:40.254042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.111 [2024-07-15 09:35:40.256950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.111 [2024-07-15 09:35:40.266298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.111 [2024-07-15 09:35:40.266631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.111 [2024-07-15 09:35:40.266658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.111 [2024-07-15 09:35:40.266673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.111 [2024-07-15 09:35:40.266923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.111 [2024-07-15 09:35:40.267157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.111 [2024-07-15 09:35:40.267176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.111 [2024-07-15 09:35:40.267188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.111 [2024-07-15 09:35:40.270061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.111 [2024-07-15 09:35:40.279328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.111 [2024-07-15 09:35:40.279753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.111 [2024-07-15 09:35:40.279795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.111 [2024-07-15 09:35:40.279820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.111 [2024-07-15 09:35:40.280048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.111 [2024-07-15 09:35:40.280275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.111 [2024-07-15 09:35:40.280293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.111 [2024-07-15 09:35:40.280310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.111 [2024-07-15 09:35:40.283273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.111 [2024-07-15 09:35:40.292363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.111 [2024-07-15 09:35:40.292789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.111 [2024-07-15 09:35:40.292837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.111 [2024-07-15 09:35:40.292855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.111 [2024-07-15 09:35:40.293095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.111 [2024-07-15 09:35:40.293302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.111 [2024-07-15 09:35:40.293321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.111 [2024-07-15 09:35:40.293332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.111 [2024-07-15 09:35:40.296273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.373 [2024-07-15 09:35:40.305668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.373 [2024-07-15 09:35:40.306084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.373 [2024-07-15 09:35:40.306129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.373 [2024-07-15 09:35:40.306145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.373 [2024-07-15 09:35:40.306401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.373 [2024-07-15 09:35:40.306618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.373 [2024-07-15 09:35:40.306637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.373 [2024-07-15 09:35:40.306648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.373 [2024-07-15 09:35:40.309767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.373 [2024-07-15 09:35:40.318980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.373 [2024-07-15 09:35:40.319317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.373 [2024-07-15 09:35:40.319344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.373 [2024-07-15 09:35:40.319360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.373 [2024-07-15 09:35:40.319582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.373 [2024-07-15 09:35:40.319816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.373 [2024-07-15 09:35:40.319837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.373 [2024-07-15 09:35:40.319865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.373 [2024-07-15 09:35:40.322760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.373 [2024-07-15 09:35:40.331940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.373 [2024-07-15 09:35:40.332300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.373 [2024-07-15 09:35:40.332332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.373 [2024-07-15 09:35:40.332348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.373 [2024-07-15 09:35:40.332585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.373 [2024-07-15 09:35:40.332793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.373 [2024-07-15 09:35:40.332836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.373 [2024-07-15 09:35:40.332849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.373 [2024-07-15 09:35:40.335665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.373 [2024-07-15 09:35:40.344970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.373 [2024-07-15 09:35:40.345282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.373 [2024-07-15 09:35:40.345324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.373 [2024-07-15 09:35:40.345339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.373 [2024-07-15 09:35:40.345563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.373 [2024-07-15 09:35:40.345772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.373 [2024-07-15 09:35:40.345790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.373 [2024-07-15 09:35:40.345825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.373 [2024-07-15 09:35:40.348787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.373 [2024-07-15 09:35:40.358085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.373 [2024-07-15 09:35:40.358457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.373 [2024-07-15 09:35:40.358500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.373 [2024-07-15 09:35:40.358515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.373 [2024-07-15 09:35:40.358768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.373 [2024-07-15 09:35:40.359004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.373 [2024-07-15 09:35:40.359024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.373 [2024-07-15 09:35:40.359036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.373 [2024-07-15 09:35:40.361826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.373 [2024-07-15 09:35:40.371122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.373 [2024-07-15 09:35:40.371483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.373 [2024-07-15 09:35:40.371509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.373 [2024-07-15 09:35:40.371524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.373 [2024-07-15 09:35:40.371758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.373 [2024-07-15 09:35:40.372003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.373 [2024-07-15 09:35:40.372025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.373 [2024-07-15 09:35:40.372038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.373 [2024-07-15 09:35:40.374950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.373 [2024-07-15 09:35:40.384315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.373 [2024-07-15 09:35:40.384740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.373 [2024-07-15 09:35:40.384781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.373 [2024-07-15 09:35:40.384797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.373 [2024-07-15 09:35:40.385037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.374 [2024-07-15 09:35:40.385264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.374 [2024-07-15 09:35:40.385283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.374 [2024-07-15 09:35:40.385296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.374 [2024-07-15 09:35:40.388145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.374 [2024-07-15 09:35:40.397281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.374 [2024-07-15 09:35:40.397710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.374 [2024-07-15 09:35:40.397752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.374 [2024-07-15 09:35:40.397768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.374 [2024-07-15 09:35:40.398018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.374 [2024-07-15 09:35:40.398246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.374 [2024-07-15 09:35:40.398265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.374 [2024-07-15 09:35:40.398277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.374 [2024-07-15 09:35:40.401162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.374 [2024-07-15 09:35:40.410457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.374 [2024-07-15 09:35:40.410818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.374 [2024-07-15 09:35:40.410846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.374 [2024-07-15 09:35:40.410861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.374 [2024-07-15 09:35:40.411096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.374 [2024-07-15 09:35:40.411304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.374 [2024-07-15 09:35:40.411323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.374 [2024-07-15 09:35:40.411335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.374 [2024-07-15 09:35:40.414264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.374 [2024-07-15 09:35:40.423514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.374 [2024-07-15 09:35:40.423876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.374 [2024-07-15 09:35:40.423902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.374 [2024-07-15 09:35:40.423917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.374 [2024-07-15 09:35:40.424186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.374 [2024-07-15 09:35:40.424420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.374 [2024-07-15 09:35:40.424440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.374 [2024-07-15 09:35:40.424453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.374 [2024-07-15 09:35:40.427957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.374 [2024-07-15 09:35:40.436717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.374 [2024-07-15 09:35:40.437113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.374 [2024-07-15 09:35:40.437141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.374 [2024-07-15 09:35:40.437156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.374 [2024-07-15 09:35:40.437397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.374 [2024-07-15 09:35:40.437589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.374 [2024-07-15 09:35:40.437607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.374 [2024-07-15 09:35:40.437619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.374 [2024-07-15 09:35:40.440833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.374 [2024-07-15 09:35:40.450040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.374 [2024-07-15 09:35:40.450408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.374 [2024-07-15 09:35:40.450435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.374 [2024-07-15 09:35:40.450451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.374 [2024-07-15 09:35:40.450684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.374 [2024-07-15 09:35:40.450939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.374 [2024-07-15 09:35:40.450961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.374 [2024-07-15 09:35:40.450974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.374 [2024-07-15 09:35:40.453883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.374 [2024-07-15 09:35:40.463100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.374 [2024-07-15 09:35:40.463441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.374 [2024-07-15 09:35:40.463468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.374 [2024-07-15 09:35:40.463488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.374 [2024-07-15 09:35:40.463711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.374 [2024-07-15 09:35:40.463969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.374 [2024-07-15 09:35:40.463991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.374 [2024-07-15 09:35:40.464004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.374 [2024-07-15 09:35:40.466916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.374 [2024-07-15 09:35:40.476300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.374 [2024-07-15 09:35:40.476662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.374 [2024-07-15 09:35:40.476705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.374 [2024-07-15 09:35:40.476720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.374 [2024-07-15 09:35:40.476971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.374 [2024-07-15 09:35:40.477219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.374 [2024-07-15 09:35:40.477238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.374 [2024-07-15 09:35:40.477250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.374 [2024-07-15 09:35:40.480140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.374 [2024-07-15 09:35:40.489434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.374 [2024-07-15 09:35:40.489878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.374 [2024-07-15 09:35:40.489906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.374 [2024-07-15 09:35:40.489921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.374 [2024-07-15 09:35:40.490161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.374 [2024-07-15 09:35:40.490372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.374 [2024-07-15 09:35:40.490392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.374 [2024-07-15 09:35:40.490404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.374 [2024-07-15 09:35:40.493299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.374 [2024-07-15 09:35:40.502478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.374 [2024-07-15 09:35:40.502810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.374 [2024-07-15 09:35:40.502838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.374 [2024-07-15 09:35:40.502854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.374 [2024-07-15 09:35:40.503076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.374 [2024-07-15 09:35:40.503284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.374 [2024-07-15 09:35:40.503307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.374 [2024-07-15 09:35:40.503320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.374 [2024-07-15 09:35:40.506133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.374 [2024-07-15 09:35:40.515514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.374 [2024-07-15 09:35:40.515875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.374 [2024-07-15 09:35:40.515919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.374 [2024-07-15 09:35:40.515934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.374 [2024-07-15 09:35:40.516186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.374 [2024-07-15 09:35:40.516393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.374 [2024-07-15 09:35:40.516411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.374 [2024-07-15 09:35:40.516423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.374 [2024-07-15 09:35:40.519340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.374 [2024-07-15 09:35:40.528526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.374 [2024-07-15 09:35:40.528886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.374 [2024-07-15 09:35:40.528929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.375 [2024-07-15 09:35:40.528944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.375 [2024-07-15 09:35:40.529189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.375 [2024-07-15 09:35:40.529381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.375 [2024-07-15 09:35:40.529400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.375 [2024-07-15 09:35:40.529412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.375 [2024-07-15 09:35:40.532339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.375 [2024-07-15 09:35:40.541590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.375 [2024-07-15 09:35:40.541955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.375 [2024-07-15 09:35:40.541982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.375 [2024-07-15 09:35:40.541996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.375 [2024-07-15 09:35:40.542210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.375 [2024-07-15 09:35:40.542417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.375 [2024-07-15 09:35:40.542436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.375 [2024-07-15 09:35:40.542448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.375 [2024-07-15 09:35:40.545371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.375 [2024-07-15 09:35:40.554584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.375 [2024-07-15 09:35:40.554950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.375 [2024-07-15 09:35:40.554991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.375 [2024-07-15 09:35:40.555006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.375 [2024-07-15 09:35:40.555254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.375 [2024-07-15 09:35:40.555446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.375 [2024-07-15 09:35:40.555464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.375 [2024-07-15 09:35:40.555476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.375 [2024-07-15 09:35:40.558305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.693 [2024-07-15 09:35:40.568366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.693 [2024-07-15 09:35:40.568785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.693 [2024-07-15 09:35:40.568824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.693 [2024-07-15 09:35:40.568843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.693 [2024-07-15 09:35:40.569071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.693 [2024-07-15 09:35:40.569297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.693 [2024-07-15 09:35:40.569316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.693 [2024-07-15 09:35:40.569328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.693 [2024-07-15 09:35:40.572342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.693 [2024-07-15 09:35:40.581583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.694 [2024-07-15 09:35:40.581987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.694 [2024-07-15 09:35:40.582017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.694 [2024-07-15 09:35:40.582033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.694 [2024-07-15 09:35:40.582274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.694 [2024-07-15 09:35:40.582467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.694 [2024-07-15 09:35:40.582485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.694 [2024-07-15 09:35:40.582498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.694 [2024-07-15 09:35:40.585407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.694 [2024-07-15 09:35:40.594754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.694 [2024-07-15 09:35:40.595195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.694 [2024-07-15 09:35:40.595247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.694 [2024-07-15 09:35:40.595263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.694 [2024-07-15 09:35:40.595497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.694 [2024-07-15 09:35:40.595689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.694 [2024-07-15 09:35:40.595708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.694 [2024-07-15 09:35:40.595720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.694 [2024-07-15 09:35:40.598770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.694 [2024-07-15 09:35:40.607899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.694 [2024-07-15 09:35:40.608399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.694 [2024-07-15 09:35:40.608440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.694 [2024-07-15 09:35:40.608456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.694 [2024-07-15 09:35:40.608701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.694 [2024-07-15 09:35:40.608921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.694 [2024-07-15 09:35:40.608942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.694 [2024-07-15 09:35:40.608955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.694 [2024-07-15 09:35:40.611863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.694 [2024-07-15 09:35:40.621290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.694 [2024-07-15 09:35:40.621660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.694 [2024-07-15 09:35:40.621688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.694 [2024-07-15 09:35:40.621703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.694 [2024-07-15 09:35:40.621955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.694 [2024-07-15 09:35:40.622173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.694 [2024-07-15 09:35:40.622192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.694 [2024-07-15 09:35:40.622204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.694 [2024-07-15 09:35:40.625282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.694 [2024-07-15 09:35:40.634592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.694 [2024-07-15 09:35:40.635007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.694 [2024-07-15 09:35:40.635058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.694 [2024-07-15 09:35:40.635075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.694 [2024-07-15 09:35:40.635313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.694 [2024-07-15 09:35:40.635511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.694 [2024-07-15 09:35:40.635530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.694 [2024-07-15 09:35:40.635546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.694 [2024-07-15 09:35:40.638561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.694 [2024-07-15 09:35:40.647856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.694 [2024-07-15 09:35:40.648253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.694 [2024-07-15 09:35:40.648292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.694 [2024-07-15 09:35:40.648325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.694 [2024-07-15 09:35:40.648565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.694 [2024-07-15 09:35:40.648763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.694 [2024-07-15 09:35:40.648797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.694 [2024-07-15 09:35:40.648821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.694 [2024-07-15 09:35:40.651879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.694 [2024-07-15 09:35:40.661248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.694 [2024-07-15 09:35:40.661682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.694 [2024-07-15 09:35:40.661727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.694 [2024-07-15 09:35:40.661744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.694 [2024-07-15 09:35:40.662007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.694 [2024-07-15 09:35:40.662225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.694 [2024-07-15 09:35:40.662245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.694 [2024-07-15 09:35:40.662257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.694 [2024-07-15 09:35:40.665243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.694 [2024-07-15 09:35:40.674608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.694 [2024-07-15 09:35:40.674980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.694 [2024-07-15 09:35:40.675008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.694 [2024-07-15 09:35:40.675024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.694 [2024-07-15 09:35:40.675253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.694 [2024-07-15 09:35:40.675508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.694 [2024-07-15 09:35:40.675528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.694 [2024-07-15 09:35:40.675540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.694 [2024-07-15 09:35:40.679025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.694 [2024-07-15 09:35:40.687985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.694 [2024-07-15 09:35:40.688410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.694 [2024-07-15 09:35:40.688437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.694 [2024-07-15 09:35:40.688453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.694 [2024-07-15 09:35:40.688673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.694 [2024-07-15 09:35:40.688919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.694 [2024-07-15 09:35:40.688941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.694 [2024-07-15 09:35:40.688954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.694 [2024-07-15 09:35:40.691996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.694 [2024-07-15 09:35:40.701370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.694 [2024-07-15 09:35:40.701807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.694 [2024-07-15 09:35:40.701836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.694 [2024-07-15 09:35:40.701852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.694 [2024-07-15 09:35:40.702079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.694 [2024-07-15 09:35:40.702297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.694 [2024-07-15 09:35:40.702316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.694 [2024-07-15 09:35:40.702328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.694 [2024-07-15 09:35:40.705369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.695 [2024-07-15 09:35:40.714662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.695 [2024-07-15 09:35:40.715085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.695 [2024-07-15 09:35:40.715114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.695 [2024-07-15 09:35:40.715129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.695 [2024-07-15 09:35:40.715357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.695 [2024-07-15 09:35:40.715572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.695 [2024-07-15 09:35:40.715592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.695 [2024-07-15 09:35:40.715604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.695 [2024-07-15 09:35:40.718657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.695 [2024-07-15 09:35:40.728139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.695 [2024-07-15 09:35:40.728535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.695 [2024-07-15 09:35:40.728563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.695 [2024-07-15 09:35:40.728579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.695 [2024-07-15 09:35:40.728816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.695 [2024-07-15 09:35:40.729043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.695 [2024-07-15 09:35:40.729064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.695 [2024-07-15 09:35:40.729077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.695 [2024-07-15 09:35:40.732067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.695 [2024-07-15 09:35:40.741401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.695 [2024-07-15 09:35:40.741838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.695 [2024-07-15 09:35:40.741866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.695 [2024-07-15 09:35:40.741882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.695 [2024-07-15 09:35:40.742123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.695 [2024-07-15 09:35:40.742321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.695 [2024-07-15 09:35:40.742340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.695 [2024-07-15 09:35:40.742353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.695 [2024-07-15 09:35:40.745350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.695 [2024-07-15 09:35:40.754681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.695 [2024-07-15 09:35:40.755058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.695 [2024-07-15 09:35:40.755087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.695 [2024-07-15 09:35:40.755103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.695 [2024-07-15 09:35:40.755346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.695 [2024-07-15 09:35:40.755544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.695 [2024-07-15 09:35:40.755563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.695 [2024-07-15 09:35:40.755575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.695 [2024-07-15 09:35:40.758559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.695 [2024-07-15 09:35:40.768025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.695 [2024-07-15 09:35:40.768414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.695 [2024-07-15 09:35:40.768442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.695 [2024-07-15 09:35:40.768458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.695 [2024-07-15 09:35:40.768698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.695 [2024-07-15 09:35:40.768959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.695 [2024-07-15 09:35:40.768980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.695 [2024-07-15 09:35:40.768993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.695 [2024-07-15 09:35:40.772034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.695 [2024-07-15 09:35:40.781278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.695 [2024-07-15 09:35:40.781692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.695 [2024-07-15 09:35:40.781718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.695 [2024-07-15 09:35:40.781747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.695 [2024-07-15 09:35:40.781999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.695 [2024-07-15 09:35:40.782220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.695 [2024-07-15 09:35:40.782240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.695 [2024-07-15 09:35:40.782252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.695 [2024-07-15 09:35:40.785253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.695 [2024-07-15 09:35:40.794461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.695 [2024-07-15 09:35:40.794900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.695 [2024-07-15 09:35:40.794928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.695 [2024-07-15 09:35:40.794944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.695 [2024-07-15 09:35:40.795185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.695 [2024-07-15 09:35:40.795383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.695 [2024-07-15 09:35:40.795402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.695 [2024-07-15 09:35:40.795414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.695 [2024-07-15 09:35:40.798399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.695 [2024-07-15 09:35:40.807630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.695 [2024-07-15 09:35:40.808047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.695 [2024-07-15 09:35:40.808075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.695 [2024-07-15 09:35:40.808091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.695 [2024-07-15 09:35:40.808323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.695 [2024-07-15 09:35:40.808538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.695 [2024-07-15 09:35:40.808557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.695 [2024-07-15 09:35:40.808570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.695 [2024-07-15 09:35:40.811570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.695 [2024-07-15 09:35:40.820969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.695 [2024-07-15 09:35:40.821419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.695 [2024-07-15 09:35:40.821447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.695 [2024-07-15 09:35:40.821468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.695 [2024-07-15 09:35:40.821711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.695 [2024-07-15 09:35:40.821941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.695 [2024-07-15 09:35:40.821963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.695 [2024-07-15 09:35:40.821976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.695 [2024-07-15 09:35:40.824965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.695 [2024-07-15 09:35:40.834284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.695 [2024-07-15 09:35:40.834653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.695 [2024-07-15 09:35:40.834680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.695 [2024-07-15 09:35:40.834696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.695 [2024-07-15 09:35:40.834934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.695 [2024-07-15 09:35:40.835178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.695 [2024-07-15 09:35:40.835198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.695 [2024-07-15 09:35:40.835210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.695 [2024-07-15 09:35:40.838179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.695 [2024-07-15 09:35:40.847726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.695 [2024-07-15 09:35:40.848070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.695 [2024-07-15 09:35:40.848099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.695 [2024-07-15 09:35:40.848119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.695 [2024-07-15 09:35:40.848359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.695 [2024-07-15 09:35:40.848572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.695 [2024-07-15 09:35:40.848592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.696 [2024-07-15 09:35:40.848604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.696 [2024-07-15 09:35:40.851774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.974 [2024-07-15 09:35:40.861240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.974 [2024-07-15 09:35:40.861565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.974 [2024-07-15 09:35:40.861595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.974 [2024-07-15 09:35:40.861611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.974 [2024-07-15 09:35:40.861865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.974 [2024-07-15 09:35:40.862078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.974 [2024-07-15 09:35:40.862118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.974 [2024-07-15 09:35:40.862131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.974 [2024-07-15 09:35:40.865264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.974 [2024-07-15 09:35:40.874549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.974 [2024-07-15 09:35:40.874937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.974 [2024-07-15 09:35:40.874966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.974 [2024-07-15 09:35:40.874982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.974 [2024-07-15 09:35:40.875210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.974 [2024-07-15 09:35:40.875424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.974 [2024-07-15 09:35:40.875443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.974 [2024-07-15 09:35:40.875455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.974 [2024-07-15 09:35:40.878462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.974 [2024-07-15 09:35:40.887774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.974 [2024-07-15 09:35:40.888208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.974 [2024-07-15 09:35:40.888237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.974 [2024-07-15 09:35:40.888253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.974 [2024-07-15 09:35:40.888480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.974 [2024-07-15 09:35:40.888711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.974 [2024-07-15 09:35:40.888730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.974 [2024-07-15 09:35:40.888743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.974 [2024-07-15 09:35:40.891772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.974 [2024-07-15 09:35:40.901042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.974 [2024-07-15 09:35:40.901495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.974 [2024-07-15 09:35:40.901523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.974 [2024-07-15 09:35:40.901539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.974 [2024-07-15 09:35:40.901780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.974 [2024-07-15 09:35:40.902015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.974 [2024-07-15 09:35:40.902036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.974 [2024-07-15 09:35:40.902050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.974 [2024-07-15 09:35:40.905037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.974 [2024-07-15 09:35:40.914314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.974 [2024-07-15 09:35:40.914653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.974 [2024-07-15 09:35:40.914681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.974 [2024-07-15 09:35:40.914697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.974 [2024-07-15 09:35:40.914936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.974 [2024-07-15 09:35:40.915173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.974 [2024-07-15 09:35:40.915193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.974 [2024-07-15 09:35:40.915205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.974 [2024-07-15 09:35:40.918175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.974 [2024-07-15 09:35:40.927659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.974 [2024-07-15 09:35:40.928033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.974 [2024-07-15 09:35:40.928062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.974 [2024-07-15 09:35:40.928077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.974 [2024-07-15 09:35:40.928306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.974 [2024-07-15 09:35:40.928539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.974 [2024-07-15 09:35:40.928559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.974 [2024-07-15 09:35:40.928572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.974 [2024-07-15 09:35:40.932045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.974 [2024-07-15 09:35:40.941056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.974 [2024-07-15 09:35:40.941397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.974 [2024-07-15 09:35:40.941425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.974 [2024-07-15 09:35:40.941441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.974 [2024-07-15 09:35:40.941662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.974 [2024-07-15 09:35:40.941908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.974 [2024-07-15 09:35:40.941929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.974 [2024-07-15 09:35:40.941943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.974 [2024-07-15 09:35:40.944980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.974 [2024-07-15 09:35:40.954427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.974 [2024-07-15 09:35:40.954768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.974 [2024-07-15 09:35:40.954796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:29.974 [2024-07-15 09:35:40.954821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:29.974 [2024-07-15 09:35:40.955041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:29.974 [2024-07-15 09:35:40.955258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.974 [2024-07-15 09:35:40.955278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.974 [2024-07-15 09:35:40.955291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.975 [2024-07-15 09:35:40.958273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.975 [2024-07-15 09:35:40.967677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.975 [2024-07-15 09:35:40.968122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.975 [2024-07-15 09:35:40.968149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.975 [2024-07-15 09:35:40.968180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.975 [2024-07-15 09:35:40.968420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.975 [2024-07-15 09:35:40.968618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.975 [2024-07-15 09:35:40.968637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.975 [2024-07-15 09:35:40.968649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.975 [2024-07-15 09:35:40.971658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.975 [2024-07-15 09:35:40.981024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.975 [2024-07-15 09:35:40.981423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.975 [2024-07-15 09:35:40.981451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.975 [2024-07-15 09:35:40.981467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.975 [2024-07-15 09:35:40.981707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.975 [2024-07-15 09:35:40.981952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.975 [2024-07-15 09:35:40.981974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.975 [2024-07-15 09:35:40.981987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.975 [2024-07-15 09:35:40.984976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.975 [2024-07-15 09:35:40.994234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.975 [2024-07-15 09:35:40.994619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.975 [2024-07-15 09:35:40.994645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.975 [2024-07-15 09:35:40.994677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.975 [2024-07-15 09:35:40.994900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.975 [2024-07-15 09:35:40.995132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.975 [2024-07-15 09:35:40.995166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.975 [2024-07-15 09:35:40.995183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.975 [2024-07-15 09:35:40.998155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.975 [2024-07-15 09:35:41.007420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.975 [2024-07-15 09:35:41.007787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.975 [2024-07-15 09:35:41.007821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.975 [2024-07-15 09:35:41.007838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.975 [2024-07-15 09:35:41.008058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.975 [2024-07-15 09:35:41.008271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.975 [2024-07-15 09:35:41.008291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.975 [2024-07-15 09:35:41.008303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.975 [2024-07-15 09:35:41.011284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.975 [2024-07-15 09:35:41.020716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.975 [2024-07-15 09:35:41.021093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.975 [2024-07-15 09:35:41.021121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.975 [2024-07-15 09:35:41.021136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.975 [2024-07-15 09:35:41.021378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.975 [2024-07-15 09:35:41.021576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.975 [2024-07-15 09:35:41.021596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.975 [2024-07-15 09:35:41.021608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.975 [2024-07-15 09:35:41.024618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.975 [2024-07-15 09:35:41.034047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.975 [2024-07-15 09:35:41.034454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.975 [2024-07-15 09:35:41.034496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.975 [2024-07-15 09:35:41.034512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.975 [2024-07-15 09:35:41.034739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.975 [2024-07-15 09:35:41.035001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.975 [2024-07-15 09:35:41.035023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.975 [2024-07-15 09:35:41.035036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.975 [2024-07-15 09:35:41.038010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.975 [2024-07-15 09:35:41.047291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.975 [2024-07-15 09:35:41.047642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.975 [2024-07-15 09:35:41.047669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.975 [2024-07-15 09:35:41.047684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.975 [2024-07-15 09:35:41.047936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.975 [2024-07-15 09:35:41.048176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.975 [2024-07-15 09:35:41.048196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.975 [2024-07-15 09:35:41.048208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.975 [2024-07-15 09:35:41.051182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.975 [2024-07-15 09:35:41.060621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.975 [2024-07-15 09:35:41.061068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.975 [2024-07-15 09:35:41.061097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.975 [2024-07-15 09:35:41.061112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.975 [2024-07-15 09:35:41.061342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.975 [2024-07-15 09:35:41.061555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.975 [2024-07-15 09:35:41.061574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.975 [2024-07-15 09:35:41.061586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.975 [2024-07-15 09:35:41.064593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.975 [2024-07-15 09:35:41.073917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.975 [2024-07-15 09:35:41.074305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.975 [2024-07-15 09:35:41.074346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.975 [2024-07-15 09:35:41.074361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.975 [2024-07-15 09:35:41.074615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.975 [2024-07-15 09:35:41.074840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.975 [2024-07-15 09:35:41.074876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.975 [2024-07-15 09:35:41.074890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.975 [2024-07-15 09:35:41.077924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.975 [2024-07-15 09:35:41.087207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.975 [2024-07-15 09:35:41.087588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.975 [2024-07-15 09:35:41.087616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.975 [2024-07-15 09:35:41.087632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.975 [2024-07-15 09:35:41.087869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.975 [2024-07-15 09:35:41.088101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.975 [2024-07-15 09:35:41.088122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.975 [2024-07-15 09:35:41.088135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.975 [2024-07-15 09:35:41.091064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.975 [2024-07-15 09:35:41.100423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.975 [2024-07-15 09:35:41.100795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.975 [2024-07-15 09:35:41.100830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.975 [2024-07-15 09:35:41.100846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.975 [2024-07-15 09:35:41.101089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.975 [2024-07-15 09:35:41.101302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.975 [2024-07-15 09:35:41.101321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.976 [2024-07-15 09:35:41.101334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.976 [2024-07-15 09:35:41.104305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.976 [2024-07-15 09:35:41.113696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.976 [2024-07-15 09:35:41.114075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.976 [2024-07-15 09:35:41.114104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.976 [2024-07-15 09:35:41.114120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.976 [2024-07-15 09:35:41.114347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.976 [2024-07-15 09:35:41.114561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.976 [2024-07-15 09:35:41.114581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.976 [2024-07-15 09:35:41.114593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.976 [2024-07-15 09:35:41.117636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.976 [2024-07-15 09:35:41.126940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.976 [2024-07-15 09:35:41.127391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.976 [2024-07-15 09:35:41.127419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.976 [2024-07-15 09:35:41.127435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.976 [2024-07-15 09:35:41.127676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.976 [2024-07-15 09:35:41.127902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.976 [2024-07-15 09:35:41.127923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.976 [2024-07-15 09:35:41.127936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.976 [2024-07-15 09:35:41.130868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.976 [2024-07-15 09:35:41.140199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.976 [2024-07-15 09:35:41.140510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.976 [2024-07-15 09:35:41.140552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.976 [2024-07-15 09:35:41.140568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.976 [2024-07-15 09:35:41.140789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.976 [2024-07-15 09:35:41.141024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.976 [2024-07-15 09:35:41.141045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.976 [2024-07-15 09:35:41.141059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.976 [2024-07-15 09:35:41.144049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.976 [2024-07-15 09:35:41.153506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.976 [2024-07-15 09:35:41.153899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.976 [2024-07-15 09:35:41.153927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:29.976 [2024-07-15 09:35:41.153943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:29.976 [2024-07-15 09:35:41.154173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:29.976 [2024-07-15 09:35:41.154387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.976 [2024-07-15 09:35:41.154406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.976 [2024-07-15 09:35:41.154418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.976 [2024-07-15 09:35:41.157401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.976 [2024-07-15 09:35:41.166932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.238 [2024-07-15 09:35:41.167324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.238 [2024-07-15 09:35:41.167354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.238 [2024-07-15 09:35:41.167371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.238 [2024-07-15 09:35:41.167612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.238 [2024-07-15 09:35:41.167837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.238 [2024-07-15 09:35:41.167857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.238 [2024-07-15 09:35:41.167870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.238 [2024-07-15 09:35:41.170877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.238 [2024-07-15 09:35:41.180202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.238 [2024-07-15 09:35:41.180614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.238 [2024-07-15 09:35:41.180643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.238 [2024-07-15 09:35:41.180663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.238 [2024-07-15 09:35:41.180887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.238 [2024-07-15 09:35:41.181121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.238 [2024-07-15 09:35:41.181142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.238 [2024-07-15 09:35:41.181170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.238 [2024-07-15 09:35:41.184745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.238 [2024-07-15 09:35:41.193507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.238 [2024-07-15 09:35:41.193879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.238 [2024-07-15 09:35:41.193908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.238 [2024-07-15 09:35:41.193923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.238 [2024-07-15 09:35:41.194150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.238 [2024-07-15 09:35:41.194366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.238 [2024-07-15 09:35:41.194385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.238 [2024-07-15 09:35:41.194398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.238 [2024-07-15 09:35:41.197437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.238 [2024-07-15 09:35:41.206746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.238 [2024-07-15 09:35:41.207106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.238 [2024-07-15 09:35:41.207134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.238 [2024-07-15 09:35:41.207150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.238 [2024-07-15 09:35:41.207378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.238 [2024-07-15 09:35:41.207592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.238 [2024-07-15 09:35:41.207611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.238 [2024-07-15 09:35:41.207623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.238 [2024-07-15 09:35:41.210634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.238 [2024-07-15 09:35:41.219922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.238 [2024-07-15 09:35:41.220310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.238 [2024-07-15 09:35:41.220337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.238 [2024-07-15 09:35:41.220352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.238 [2024-07-15 09:35:41.220572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.238 [2024-07-15 09:35:41.220785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.238 [2024-07-15 09:35:41.220835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.238 [2024-07-15 09:35:41.220849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.238 [2024-07-15 09:35:41.223741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.238 [2024-07-15 09:35:41.233297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.238 [2024-07-15 09:35:41.233707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.238 [2024-07-15 09:35:41.233735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.238 [2024-07-15 09:35:41.233750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.238 [2024-07-15 09:35:41.233988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.238 [2024-07-15 09:35:41.234221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.238 [2024-07-15 09:35:41.234240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.238 [2024-07-15 09:35:41.234253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.238 [2024-07-15 09:35:41.237255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.238 [2024-07-15 09:35:41.246613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.238 [2024-07-15 09:35:41.247032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.238 [2024-07-15 09:35:41.247060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.238 [2024-07-15 09:35:41.247076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.238 [2024-07-15 09:35:41.247307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.239 [2024-07-15 09:35:41.247523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.239 [2024-07-15 09:35:41.247542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.239 [2024-07-15 09:35:41.247554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.239 [2024-07-15 09:35:41.250561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.239 [2024-07-15 09:35:41.259904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.239 [2024-07-15 09:35:41.260296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.239 [2024-07-15 09:35:41.260325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.239 [2024-07-15 09:35:41.260340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.239 [2024-07-15 09:35:41.260582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.239 [2024-07-15 09:35:41.260796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.239 [2024-07-15 09:35:41.260840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.239 [2024-07-15 09:35:41.260853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.239 [2024-07-15 09:35:41.263897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.239 [2024-07-15 09:35:41.273342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.239 [2024-07-15 09:35:41.273745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.239 [2024-07-15 09:35:41.273788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.239 [2024-07-15 09:35:41.273810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.239 [2024-07-15 09:35:41.274064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.239 [2024-07-15 09:35:41.274288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.239 [2024-07-15 09:35:41.274308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.239 [2024-07-15 09:35:41.274321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.239 [2024-07-15 09:35:41.277533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.239 [2024-07-15 09:35:41.286678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.239 [2024-07-15 09:35:41.287030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.239 [2024-07-15 09:35:41.287059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.239 [2024-07-15 09:35:41.287074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.239 [2024-07-15 09:35:41.287314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.239 [2024-07-15 09:35:41.287537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.239 [2024-07-15 09:35:41.287556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.239 [2024-07-15 09:35:41.287568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.239 [2024-07-15 09:35:41.290627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.239 [2024-07-15 09:35:41.299986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.239 [2024-07-15 09:35:41.300458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.239 [2024-07-15 09:35:41.300485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.239 [2024-07-15 09:35:41.300501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.239 [2024-07-15 09:35:41.300742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.239 [2024-07-15 09:35:41.300987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.239 [2024-07-15 09:35:41.301009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.239 [2024-07-15 09:35:41.301022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.239 [2024-07-15 09:35:41.304020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.239 [2024-07-15 09:35:41.313297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.239 [2024-07-15 09:35:41.313687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.239 [2024-07-15 09:35:41.313715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.239 [2024-07-15 09:35:41.313731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.239 [2024-07-15 09:35:41.313960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.239 [2024-07-15 09:35:41.314203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.239 [2024-07-15 09:35:41.314223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.239 [2024-07-15 09:35:41.314235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.239 [2024-07-15 09:35:41.317236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.239 [2024-07-15 09:35:41.326629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.239 [2024-07-15 09:35:41.327063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.239 [2024-07-15 09:35:41.327092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.239 [2024-07-15 09:35:41.327108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.239 [2024-07-15 09:35:41.327338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.239 [2024-07-15 09:35:41.327552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.239 [2024-07-15 09:35:41.327571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.239 [2024-07-15 09:35:41.327583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.239 [2024-07-15 09:35:41.330578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.239 [2024-07-15 09:35:41.339908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.239 [2024-07-15 09:35:41.340300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.239 [2024-07-15 09:35:41.340328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.239 [2024-07-15 09:35:41.340343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.239 [2024-07-15 09:35:41.340583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.239 [2024-07-15 09:35:41.340822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.239 [2024-07-15 09:35:41.340863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.239 [2024-07-15 09:35:41.340877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.239 [2024-07-15 09:35:41.343878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.239 [2024-07-15 09:35:41.353171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.239 [2024-07-15 09:35:41.353609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.239 [2024-07-15 09:35:41.353637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.239 [2024-07-15 09:35:41.353653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.239 [2024-07-15 09:35:41.353906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.240 [2024-07-15 09:35:41.354111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.240 [2024-07-15 09:35:41.354131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.240 [2024-07-15 09:35:41.354162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.240 [2024-07-15 09:35:41.357169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.240 [2024-07-15 09:35:41.366463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.240 [2024-07-15 09:35:41.366787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.240 [2024-07-15 09:35:41.366836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.240 [2024-07-15 09:35:41.366854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.240 [2024-07-15 09:35:41.367095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.240 [2024-07-15 09:35:41.367310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.240 [2024-07-15 09:35:41.367330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.240 [2024-07-15 09:35:41.367342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.240 [2024-07-15 09:35:41.370349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.240 [2024-07-15 09:35:41.379870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.240 [2024-07-15 09:35:41.380288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.240 [2024-07-15 09:35:41.380330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.240 [2024-07-15 09:35:41.380346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.240 [2024-07-15 09:35:41.380613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.240 [2024-07-15 09:35:41.380839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.240 [2024-07-15 09:35:41.380876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.240 [2024-07-15 09:35:41.380889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.240 [2024-07-15 09:35:41.383887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.240 [2024-07-15 09:35:41.393244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.240 [2024-07-15 09:35:41.393679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.240 [2024-07-15 09:35:41.393707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.240 [2024-07-15 09:35:41.393722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.240 [2024-07-15 09:35:41.393962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.240 [2024-07-15 09:35:41.394196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.240 [2024-07-15 09:35:41.394215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.240 [2024-07-15 09:35:41.394227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.240 [2024-07-15 09:35:41.397283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.240 [2024-07-15 09:35:41.406448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.240 [2024-07-15 09:35:41.406889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.240 [2024-07-15 09:35:41.406918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.240 [2024-07-15 09:35:41.406934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.240 [2024-07-15 09:35:41.407164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.240 [2024-07-15 09:35:41.407378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.240 [2024-07-15 09:35:41.407397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.240 [2024-07-15 09:35:41.407409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.240 [2024-07-15 09:35:41.410418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.240 [2024-07-15 09:35:41.419672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.240 [2024-07-15 09:35:41.420076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.240 [2024-07-15 09:35:41.420103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.240 [2024-07-15 09:35:41.420118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.240 [2024-07-15 09:35:41.420357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.240 [2024-07-15 09:35:41.420555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.240 [2024-07-15 09:35:41.420575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.240 [2024-07-15 09:35:41.420587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.240 [2024-07-15 09:35:41.423666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.502 [2024-07-15 09:35:41.433065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.502 [2024-07-15 09:35:41.433519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.502 [2024-07-15 09:35:41.433547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.502 [2024-07-15 09:35:41.433563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.502 [2024-07-15 09:35:41.433813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.502 [2024-07-15 09:35:41.434044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.502 [2024-07-15 09:35:41.434065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.502 [2024-07-15 09:35:41.434079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.502 [2024-07-15 09:35:41.437456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.502 [2024-07-15 09:35:41.446427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.502 [2024-07-15 09:35:41.446864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.502 [2024-07-15 09:35:41.446893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.502 [2024-07-15 09:35:41.446909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.502 [2024-07-15 09:35:41.447150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.502 [2024-07-15 09:35:41.447353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.502 [2024-07-15 09:35:41.447372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.502 [2024-07-15 09:35:41.447385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.502 [2024-07-15 09:35:41.450424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.502 [2024-07-15 09:35:41.459860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.502 [2024-07-15 09:35:41.460252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.502 [2024-07-15 09:35:41.460281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.502 [2024-07-15 09:35:41.460297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.503 [2024-07-15 09:35:41.460526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.503 [2024-07-15 09:35:41.460755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.503 [2024-07-15 09:35:41.460775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.503 [2024-07-15 09:35:41.460811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.503 [2024-07-15 09:35:41.463772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.503 [2024-07-15 09:35:41.473067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.503 [2024-07-15 09:35:41.473484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.503 [2024-07-15 09:35:41.473512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.503 [2024-07-15 09:35:41.473528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.503 [2024-07-15 09:35:41.473755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.503 [2024-07-15 09:35:41.474002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.503 [2024-07-15 09:35:41.474024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.503 [2024-07-15 09:35:41.474037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.503 [2024-07-15 09:35:41.477048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.503 [2024-07-15 09:35:41.486333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.503 [2024-07-15 09:35:41.486749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.503 [2024-07-15 09:35:41.486776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.503 [2024-07-15 09:35:41.486816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.503 [2024-07-15 09:35:41.487045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.503 [2024-07-15 09:35:41.487278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.503 [2024-07-15 09:35:41.487298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.503 [2024-07-15 09:35:41.487310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.503 [2024-07-15 09:35:41.490287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.503 [2024-07-15 09:35:41.499612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.503 [2024-07-15 09:35:41.500014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.503 [2024-07-15 09:35:41.500043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.503 [2024-07-15 09:35:41.500060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.503 [2024-07-15 09:35:41.500288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.503 [2024-07-15 09:35:41.500502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.503 [2024-07-15 09:35:41.500521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.503 [2024-07-15 09:35:41.500533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.503 [2024-07-15 09:35:41.503514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.503 [2024-07-15 09:35:41.512981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.503 [2024-07-15 09:35:41.513372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.503 [2024-07-15 09:35:41.513401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.503 [2024-07-15 09:35:41.513416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.503 [2024-07-15 09:35:41.513658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.503 [2024-07-15 09:35:41.513918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.503 [2024-07-15 09:35:41.513940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.503 [2024-07-15 09:35:41.513953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.503 [2024-07-15 09:35:41.516941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.503 [2024-07-15 09:35:41.526226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.503 [2024-07-15 09:35:41.526594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.503 [2024-07-15 09:35:41.526622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.503 [2024-07-15 09:35:41.526637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.503 [2024-07-15 09:35:41.526876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.503 [2024-07-15 09:35:41.527087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.503 [2024-07-15 09:35:41.527108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.503 [2024-07-15 09:35:41.527121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.503 [2024-07-15 09:35:41.530125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.503 [2024-07-15 09:35:41.539363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.503 [2024-07-15 09:35:41.539778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.503 [2024-07-15 09:35:41.539835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.503 [2024-07-15 09:35:41.539855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.503 [2024-07-15 09:35:41.540113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.503 [2024-07-15 09:35:41.540321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.503 [2024-07-15 09:35:41.540340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.503 [2024-07-15 09:35:41.540352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.503 [2024-07-15 09:35:41.543276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.503 [2024-07-15 09:35:41.552504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.503 [2024-07-15 09:35:41.552929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.503 [2024-07-15 09:35:41.552971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.503 [2024-07-15 09:35:41.552987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.503 [2024-07-15 09:35:41.553227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.503 [2024-07-15 09:35:41.553434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.503 [2024-07-15 09:35:41.553453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.503 [2024-07-15 09:35:41.553465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.503 [2024-07-15 09:35:41.556421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.503 [2024-07-15 09:35:41.565658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.503 [2024-07-15 09:35:41.566056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.503 [2024-07-15 09:35:41.566083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.503 [2024-07-15 09:35:41.566113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.503 [2024-07-15 09:35:41.566334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.503 [2024-07-15 09:35:41.566543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.503 [2024-07-15 09:35:41.566562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.504 [2024-07-15 09:35:41.566573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.504 [2024-07-15 09:35:41.569384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.504 [2024-07-15 09:35:41.578724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.504 [2024-07-15 09:35:41.579112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.504 [2024-07-15 09:35:41.579155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.504 [2024-07-15 09:35:41.579170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.504 [2024-07-15 09:35:41.579421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.504 [2024-07-15 09:35:41.579629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.504 [2024-07-15 09:35:41.579652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.504 [2024-07-15 09:35:41.579664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.504 [2024-07-15 09:35:41.582580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.504 [2024-07-15 09:35:41.591879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.504 [2024-07-15 09:35:41.592241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.504 [2024-07-15 09:35:41.592269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.504 [2024-07-15 09:35:41.592284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.504 [2024-07-15 09:35:41.592518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.504 [2024-07-15 09:35:41.592726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.504 [2024-07-15 09:35:41.592745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.504 [2024-07-15 09:35:41.592757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.504 [2024-07-15 09:35:41.595677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.504 [2024-07-15 09:35:41.605093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.504 [2024-07-15 09:35:41.605484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.504 [2024-07-15 09:35:41.605525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.504 [2024-07-15 09:35:41.605541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.504 [2024-07-15 09:35:41.605762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.504 [2024-07-15 09:35:41.606006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.504 [2024-07-15 09:35:41.606028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.504 [2024-07-15 09:35:41.606041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.504 [2024-07-15 09:35:41.608973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.504 [2024-07-15 09:35:41.618209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.504 [2024-07-15 09:35:41.618525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.504 [2024-07-15 09:35:41.618552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.504 [2024-07-15 09:35:41.618567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.504 [2024-07-15 09:35:41.618782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.504 [2024-07-15 09:35:41.619012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.504 [2024-07-15 09:35:41.619033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.504 [2024-07-15 09:35:41.619046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.504 [2024-07-15 09:35:41.621949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.504 [2024-07-15 09:35:41.631261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.504 [2024-07-15 09:35:41.631696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.504 [2024-07-15 09:35:41.631743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.504 [2024-07-15 09:35:41.631758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.504 [2024-07-15 09:35:41.632028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.504 [2024-07-15 09:35:41.632254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.504 [2024-07-15 09:35:41.632273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.504 [2024-07-15 09:35:41.632285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.504 [2024-07-15 09:35:41.635058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.504 [2024-07-15 09:35:41.644245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.504 [2024-07-15 09:35:41.644670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.504 [2024-07-15 09:35:41.644712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.504 [2024-07-15 09:35:41.644728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.504 [2024-07-15 09:35:41.644978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.504 [2024-07-15 09:35:41.645209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.504 [2024-07-15 09:35:41.645228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.504 [2024-07-15 09:35:41.645240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.504 [2024-07-15 09:35:41.648165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.504 [2024-07-15 09:35:41.657295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.504 [2024-07-15 09:35:41.657659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.504 [2024-07-15 09:35:41.657702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.504 [2024-07-15 09:35:41.657718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.504 [2024-07-15 09:35:41.657969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.504 [2024-07-15 09:35:41.658206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.504 [2024-07-15 09:35:41.658225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.504 [2024-07-15 09:35:41.658237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.504 [2024-07-15 09:35:41.661125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.504 [2024-07-15 09:35:41.670338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.504 [2024-07-15 09:35:41.670705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.504 [2024-07-15 09:35:41.670747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.504 [2024-07-15 09:35:41.670762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.504 [2024-07-15 09:35:41.671019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.504 [2024-07-15 09:35:41.671251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.504 [2024-07-15 09:35:41.671269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.504 [2024-07-15 09:35:41.671281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.504 [2024-07-15 09:35:41.674167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.504 [2024-07-15 09:35:41.683415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.504 [2024-07-15 09:35:41.683790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.504 [2024-07-15 09:35:41.683825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.504 [2024-07-15 09:35:41.683842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.504 [2024-07-15 09:35:41.684055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.504 [2024-07-15 09:35:41.684322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.504 [2024-07-15 09:35:41.684342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.504 [2024-07-15 09:35:41.684355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.504 [2024-07-15 09:35:41.687871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.767 [2024-07-15 09:35:41.696875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.767 [2024-07-15 09:35:41.697274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.767 [2024-07-15 09:35:41.697317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.767 [2024-07-15 09:35:41.697333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.767 [2024-07-15 09:35:41.697585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.767 [2024-07-15 09:35:41.697818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.767 [2024-07-15 09:35:41.697837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.767 [2024-07-15 09:35:41.697866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.767 [2024-07-15 09:35:41.700933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.767 [2024-07-15 09:35:41.710129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.767 [2024-07-15 09:35:41.710525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.767 [2024-07-15 09:35:41.710553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.767 [2024-07-15 09:35:41.710568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.767 [2024-07-15 09:35:41.710789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.767 [2024-07-15 09:35:41.711039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.767 [2024-07-15 09:35:41.711059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.767 [2024-07-15 09:35:41.711078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.767 [2024-07-15 09:35:41.714002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.767 [2024-07-15 09:35:41.723312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.767 [2024-07-15 09:35:41.723684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.767 [2024-07-15 09:35:41.723732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.767 [2024-07-15 09:35:41.723747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.767 [2024-07-15 09:35:41.723998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.767 [2024-07-15 09:35:41.724232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.767 [2024-07-15 09:35:41.724251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.767 [2024-07-15 09:35:41.724264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.767 [2024-07-15 09:35:41.727190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.767 [2024-07-15 09:35:41.736563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.767 [2024-07-15 09:35:41.736957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.767 [2024-07-15 09:35:41.736984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.767 [2024-07-15 09:35:41.736999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.767 [2024-07-15 09:35:41.737235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.767 [2024-07-15 09:35:41.737443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.767 [2024-07-15 09:35:41.737462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.767 [2024-07-15 09:35:41.737474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.767 [2024-07-15 09:35:41.740409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.767 [2024-07-15 09:35:41.750033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.767 [2024-07-15 09:35:41.750421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.767 [2024-07-15 09:35:41.750449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.767 [2024-07-15 09:35:41.750464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.767 [2024-07-15 09:35:41.750700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.767 [2024-07-15 09:35:41.750948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.767 [2024-07-15 09:35:41.750970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.767 [2024-07-15 09:35:41.750984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.767 [2024-07-15 09:35:41.754049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.767 [2024-07-15 09:35:41.763447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.767 [2024-07-15 09:35:41.763937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.767 [2024-07-15 09:35:41.763966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.767 [2024-07-15 09:35:41.763982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.767 [2024-07-15 09:35:41.764208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.767 [2024-07-15 09:35:41.764416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.767 [2024-07-15 09:35:41.764435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.767 [2024-07-15 09:35:41.764447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.767 [2024-07-15 09:35:41.767503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.767 [2024-07-15 09:35:41.776728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.767 [2024-07-15 09:35:41.777095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.767 [2024-07-15 09:35:41.777145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.767 [2024-07-15 09:35:41.777162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.767 [2024-07-15 09:35:41.777416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.767 [2024-07-15 09:35:41.777607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.767 [2024-07-15 09:35:41.777626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.767 [2024-07-15 09:35:41.777638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.767 [2024-07-15 09:35:41.780683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.767 [2024-07-15 09:35:41.790024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.768 [2024-07-15 09:35:41.790418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.768 [2024-07-15 09:35:41.790445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.768 [2024-07-15 09:35:41.790461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.768 [2024-07-15 09:35:41.790681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.768 [2024-07-15 09:35:41.790942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.768 [2024-07-15 09:35:41.790964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.768 [2024-07-15 09:35:41.790978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.768 [2024-07-15 09:35:41.794014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.768 [2024-07-15 09:35:41.803213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.768 [2024-07-15 09:35:41.803647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.768 [2024-07-15 09:35:41.803694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.768 [2024-07-15 09:35:41.803710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.768 [2024-07-15 09:35:41.803976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.768 [2024-07-15 09:35:41.804194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.768 [2024-07-15 09:35:41.804213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.768 [2024-07-15 09:35:41.804225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.768 [2024-07-15 09:35:41.807140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.768 [2024-07-15 09:35:41.816376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.768 [2024-07-15 09:35:41.816748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.768 [2024-07-15 09:35:41.816797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.768 [2024-07-15 09:35:41.816841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.768 [2024-07-15 09:35:41.817084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.768 [2024-07-15 09:35:41.817293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.768 [2024-07-15 09:35:41.817313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.768 [2024-07-15 09:35:41.817325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.768 [2024-07-15 09:35:41.820232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.768 [2024-07-15 09:35:41.829510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.768 [2024-07-15 09:35:41.829825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.768 [2024-07-15 09:35:41.829852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.768 [2024-07-15 09:35:41.829867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.768 [2024-07-15 09:35:41.830084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.768 [2024-07-15 09:35:41.830293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.768 [2024-07-15 09:35:41.830311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.768 [2024-07-15 09:35:41.830324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.768 [2024-07-15 09:35:41.833250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.768 [2024-07-15 09:35:41.842574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.768 [2024-07-15 09:35:41.842984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.768 [2024-07-15 09:35:41.843024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.768 [2024-07-15 09:35:41.843040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.768 [2024-07-15 09:35:41.843260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.768 [2024-07-15 09:35:41.843468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.768 [2024-07-15 09:35:41.843486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.768 [2024-07-15 09:35:41.843498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.768 [2024-07-15 09:35:41.846447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.768 [2024-07-15 09:35:41.855692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.768 [2024-07-15 09:35:41.856057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.768 [2024-07-15 09:35:41.856086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.768 [2024-07-15 09:35:41.856102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.768 [2024-07-15 09:35:41.856352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.768 [2024-07-15 09:35:41.856544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.768 [2024-07-15 09:35:41.856563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.768 [2024-07-15 09:35:41.856575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.768 [2024-07-15 09:35:41.859464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.768 [2024-07-15 09:35:41.868947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.768 [2024-07-15 09:35:41.869347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.768 [2024-07-15 09:35:41.869375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.768 [2024-07-15 09:35:41.869390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.768 [2024-07-15 09:35:41.869625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.768 [2024-07-15 09:35:41.869859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.768 [2024-07-15 09:35:41.869879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.768 [2024-07-15 09:35:41.869892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.768 [2024-07-15 09:35:41.872796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.768 [2024-07-15 09:35:41.882065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.768 [2024-07-15 09:35:41.882502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.768 [2024-07-15 09:35:41.882548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.768 [2024-07-15 09:35:41.882563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.768 [2024-07-15 09:35:41.882817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.768 [2024-07-15 09:35:41.883015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.768 [2024-07-15 09:35:41.883035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.768 [2024-07-15 09:35:41.883047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.768 [2024-07-15 09:35:41.885934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.768 [2024-07-15 09:35:41.895200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.768 [2024-07-15 09:35:41.895516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.768 [2024-07-15 09:35:41.895542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.768 [2024-07-15 09:35:41.895566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.768 [2024-07-15 09:35:41.895780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.768 [2024-07-15 09:35:41.896018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.768 [2024-07-15 09:35:41.896039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.768 [2024-07-15 09:35:41.896052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.768 [2024-07-15 09:35:41.898960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.768 [2024-07-15 09:35:41.908314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.768 [2024-07-15 09:35:41.908740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.768 [2024-07-15 09:35:41.908767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.768 [2024-07-15 09:35:41.908798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.768 [2024-07-15 09:35:41.909049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.768 [2024-07-15 09:35:41.909259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.768 [2024-07-15 09:35:41.909277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.768 [2024-07-15 09:35:41.909289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.768 [2024-07-15 09:35:41.912176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.768 [2024-07-15 09:35:41.921468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.768 [2024-07-15 09:35:41.921856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.768 [2024-07-15 09:35:41.921882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.768 [2024-07-15 09:35:41.921912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.768 [2024-07-15 09:35:41.922134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.768 [2024-07-15 09:35:41.922341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.768 [2024-07-15 09:35:41.922360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.768 [2024-07-15 09:35:41.922372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.768 [2024-07-15 09:35:41.925303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:30.769 [2024-07-15 09:35:41.934562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.769 [2024-07-15 09:35:41.934948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.769 [2024-07-15 09:35:41.934976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.769 [2024-07-15 09:35:41.934992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.769 [2024-07-15 09:35:41.935220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.769 [2024-07-15 09:35:41.935473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.769 [2024-07-15 09:35:41.935499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.769 [2024-07-15 09:35:41.935514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.769 [2024-07-15 09:35:41.938980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:30.769 [2024-07-15 09:35:41.947826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:30.769 [2024-07-15 09:35:41.948259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.769 [2024-07-15 09:35:41.948300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:30.769 [2024-07-15 09:35:41.948316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:30.769 [2024-07-15 09:35:41.948565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:30.769 [2024-07-15 09:35:41.948757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:30.769 [2024-07-15 09:35:41.948790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:30.769 [2024-07-15 09:35:41.948812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:30.769 [2024-07-15 09:35:41.951822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.030 [2024-07-15 09:35:41.961130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.030 [2024-07-15 09:35:41.961566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.030 [2024-07-15 09:35:41.961609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:31.030 [2024-07-15 09:35:41.961626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:31.030 [2024-07-15 09:35:41.961879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:31.030 [2024-07-15 09:35:41.962084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.030 [2024-07-15 09:35:41.962104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.030 [2024-07-15 09:35:41.962117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.030 [2024-07-15 09:35:41.965034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.030 [2024-07-15 09:35:41.974316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.030 [2024-07-15 09:35:41.974676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.030 [2024-07-15 09:35:41.974722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:31.030 [2024-07-15 09:35:41.974743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:31.030 [2024-07-15 09:35:41.975025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:31.030 [2024-07-15 09:35:41.975251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.030 [2024-07-15 09:35:41.975270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.030 [2024-07-15 09:35:41.975282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.030 [2024-07-15 09:35:41.978223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.030 [2024-07-15 09:35:41.987360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.030 [2024-07-15 09:35:41.987771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.030 [2024-07-15 09:35:41.987828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:31.030 [2024-07-15 09:35:41.987845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:31.030 [2024-07-15 09:35:41.988092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:31.030 [2024-07-15 09:35:41.988284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.030 [2024-07-15 09:35:41.988303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.030 [2024-07-15 09:35:41.988315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.030 [2024-07-15 09:35:41.991125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.030 [2024-07-15 09:35:42.000423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.030 [2024-07-15 09:35:42.000793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.031 [2024-07-15 09:35:42.000888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:31.031 [2024-07-15 09:35:42.000903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:31.031 [2024-07-15 09:35:42.001148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:31.031 [2024-07-15 09:35:42.001356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.031 [2024-07-15 09:35:42.001375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.031 [2024-07-15 09:35:42.001387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.031 [2024-07-15 09:35:42.004239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.031 [2024-07-15 09:35:42.013583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.031 [2024-07-15 09:35:42.014015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.031 [2024-07-15 09:35:42.014058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:31.031 [2024-07-15 09:35:42.014075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:31.031 [2024-07-15 09:35:42.014315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:31.031 [2024-07-15 09:35:42.014522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.031 [2024-07-15 09:35:42.014541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.031 [2024-07-15 09:35:42.014553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.031 [2024-07-15 09:35:42.017489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.031 [2024-07-15 09:35:42.026609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.031 [2024-07-15 09:35:42.027002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.031 [2024-07-15 09:35:42.027028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:31.031 [2024-07-15 09:35:42.027043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:31.031 [2024-07-15 09:35:42.027263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:31.031 [2024-07-15 09:35:42.027461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.031 [2024-07-15 09:35:42.027480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.031 [2024-07-15 09:35:42.027492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.031 [2024-07-15 09:35:42.030329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.031 [2024-07-15 09:35:42.039626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.031 [2024-07-15 09:35:42.039995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.031 [2024-07-15 09:35:42.040037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:31.031 [2024-07-15 09:35:42.040053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:31.031 [2024-07-15 09:35:42.040305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:31.031 [2024-07-15 09:35:42.040512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.031 [2024-07-15 09:35:42.040530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.031 [2024-07-15 09:35:42.040542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.031 [2024-07-15 09:35:42.043351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.031 [2024-07-15 09:35:42.052687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.031 [2024-07-15 09:35:42.053068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.031 [2024-07-15 09:35:42.053094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:31.031 [2024-07-15 09:35:42.053109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:31.031 [2024-07-15 09:35:42.053309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:31.031 [2024-07-15 09:35:42.053536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.031 [2024-07-15 09:35:42.053555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.031 [2024-07-15 09:35:42.053567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.031 [2024-07-15 09:35:42.056376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.031 [2024-07-15 09:35:42.065679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.031 [2024-07-15 09:35:42.066108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.031 [2024-07-15 09:35:42.066150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:31.031 [2024-07-15 09:35:42.066166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:31.031 [2024-07-15 09:35:42.066406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:31.031 [2024-07-15 09:35:42.066614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.031 [2024-07-15 09:35:42.066632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.031 [2024-07-15 09:35:42.066649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.031 [2024-07-15 09:35:42.069463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.031 [2024-07-15 09:35:42.078806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.031 [2024-07-15 09:35:42.079165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.031 [2024-07-15 09:35:42.079207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:31.031 [2024-07-15 09:35:42.079223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:31.031 [2024-07-15 09:35:42.079475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:31.031 [2024-07-15 09:35:42.079683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.031 [2024-07-15 09:35:42.079701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.031 [2024-07-15 09:35:42.079713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.031 [2024-07-15 09:35:42.082526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.031 [2024-07-15 09:35:42.091831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.031 [2024-07-15 09:35:42.092194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.031 [2024-07-15 09:35:42.092236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:31.031 [2024-07-15 09:35:42.092252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:31.031 [2024-07-15 09:35:42.092503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:31.031 [2024-07-15 09:35:42.092709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.031 [2024-07-15 09:35:42.092728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.031 [2024-07-15 09:35:42.092739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.031 [2024-07-15 09:35:42.095652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.031 [2024-07-15 09:35:42.104970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.031 [2024-07-15 09:35:42.105329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.031 [2024-07-15 09:35:42.105357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:31.031 [2024-07-15 09:35:42.105372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:31.031 [2024-07-15 09:35:42.105612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:31.031 [2024-07-15 09:35:42.105845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.031 [2024-07-15 09:35:42.105865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.031 [2024-07-15 09:35:42.105878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.031 [2024-07-15 09:35:42.108663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.031 [2024-07-15 09:35:42.118008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.031 [2024-07-15 09:35:42.118436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.031 [2024-07-15 09:35:42.118462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420 00:27:31.031 [2024-07-15 09:35:42.118493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set 00:27:31.031 [2024-07-15 09:35:42.118732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor 00:27:31.031 [2024-07-15 09:35:42.118969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.031 [2024-07-15 09:35:42.118989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.031 [2024-07-15 09:35:42.119002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.031 [2024-07-15 09:35:42.121783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.031 [2024-07-15 09:35:42.131031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.031 [2024-07-15 09:35:42.131450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.031 [2024-07-15 09:35:42.131506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.031 [2024-07-15 09:35:42.131521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.031 [2024-07-15 09:35:42.131761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.031 [2024-07-15 09:35:42.132000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.031 [2024-07-15 09:35:42.132021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.031 [2024-07-15 09:35:42.132034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.031 [2024-07-15 09:35:42.134926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.031 [2024-07-15 09:35:42.144158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.031 [2024-07-15 09:35:42.144594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.032 [2024-07-15 09:35:42.144647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.032 [2024-07-15 09:35:42.144662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.032 [2024-07-15 09:35:42.144937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.032 [2024-07-15 09:35:42.145142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.032 [2024-07-15 09:35:42.145176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.032 [2024-07-15 09:35:42.145188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.032 [2024-07-15 09:35:42.148076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.032 [2024-07-15 09:35:42.157283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.032 [2024-07-15 09:35:42.157690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.032 [2024-07-15 09:35:42.157749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.032 [2024-07-15 09:35:42.157765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.032 [2024-07-15 09:35:42.157979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.032 [2024-07-15 09:35:42.158216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.032 [2024-07-15 09:35:42.158235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.032 [2024-07-15 09:35:42.158247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.032 [2024-07-15 09:35:42.161141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.032 [2024-07-15 09:35:42.170478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.032 [2024-07-15 09:35:42.170955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.032 [2024-07-15 09:35:42.171009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.032 [2024-07-15 09:35:42.171025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.032 [2024-07-15 09:35:42.171270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.032 [2024-07-15 09:35:42.171462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.032 [2024-07-15 09:35:42.171481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.032 [2024-07-15 09:35:42.171493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.032 [2024-07-15 09:35:42.174387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.032 [2024-07-15 09:35:42.183448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.032 [2024-07-15 09:35:42.183822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.032 [2024-07-15 09:35:42.183868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.032 [2024-07-15 09:35:42.183883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.032 [2024-07-15 09:35:42.184128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.032 [2024-07-15 09:35:42.184320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.032 [2024-07-15 09:35:42.184338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.032 [2024-07-15 09:35:42.184350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.032 [2024-07-15 09:35:42.187245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.032 [2024-07-15 09:35:42.196697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.032 [2024-07-15 09:35:42.197082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.032 [2024-07-15 09:35:42.197110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.032 [2024-07-15 09:35:42.197126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.032 [2024-07-15 09:35:42.197353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.032 [2024-07-15 09:35:42.197561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.032 [2024-07-15 09:35:42.197579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.032 [2024-07-15 09:35:42.197591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.032 [2024-07-15 09:35:42.200592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.032 [2024-07-15 09:35:42.209973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.032 [2024-07-15 09:35:42.210435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.032 [2024-07-15 09:35:42.210476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.032 [2024-07-15 09:35:42.210492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.032 [2024-07-15 09:35:42.210731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.032 [2024-07-15 09:35:42.210989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.032 [2024-07-15 09:35:42.211010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.032 [2024-07-15 09:35:42.211023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.032 [2024-07-15 09:35:42.213989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.032 [2024-07-15 09:35:42.223223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.032 [2024-07-15 09:35:42.223591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.032 [2024-07-15 09:35:42.223634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.032 [2024-07-15 09:35:42.223650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.292 [2024-07-15 09:35:42.223906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.292 [2024-07-15 09:35:42.224114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.292 [2024-07-15 09:35:42.224135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.292 [2024-07-15 09:35:42.224148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.292 [2024-07-15 09:35:42.227097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.292 [2024-07-15 09:35:42.236272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.292 [2024-07-15 09:35:42.236685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.292 [2024-07-15 09:35:42.236738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.292 [2024-07-15 09:35:42.236754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.292 [2024-07-15 09:35:42.237015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.292 [2024-07-15 09:35:42.237242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.292 [2024-07-15 09:35:42.237261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.292 [2024-07-15 09:35:42.237273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.292 [2024-07-15 09:35:42.240160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.292 [2024-07-15 09:35:42.249494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.292 [2024-07-15 09:35:42.249916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.293 [2024-07-15 09:35:42.249945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.293 [2024-07-15 09:35:42.249966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.293 [2024-07-15 09:35:42.250201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.293 [2024-07-15 09:35:42.250411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.293 [2024-07-15 09:35:42.250429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.293 [2024-07-15 09:35:42.250442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.293 [2024-07-15 09:35:42.253358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.293 [2024-07-15 09:35:42.262612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.293 [2024-07-15 09:35:42.263037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.293 [2024-07-15 09:35:42.263079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.293 [2024-07-15 09:35:42.263095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.293 [2024-07-15 09:35:42.263335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.293 [2024-07-15 09:35:42.263542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.293 [2024-07-15 09:35:42.263561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.293 [2024-07-15 09:35:42.263573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.293 [2024-07-15 09:35:42.266384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.293 [2024-07-15 09:35:42.275613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.293 [2024-07-15 09:35:42.275997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.293 [2024-07-15 09:35:42.276024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.293 [2024-07-15 09:35:42.276054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.293 [2024-07-15 09:35:42.276275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.293 [2024-07-15 09:35:42.276483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.293 [2024-07-15 09:35:42.276502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.293 [2024-07-15 09:35:42.276514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.293 [2024-07-15 09:35:42.279467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.293 [2024-07-15 09:35:42.288678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.293 [2024-07-15 09:35:42.289010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.293 [2024-07-15 09:35:42.289037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.293 [2024-07-15 09:35:42.289052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.293 [2024-07-15 09:35:42.289273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.293 [2024-07-15 09:35:42.289481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.293 [2024-07-15 09:35:42.289504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.293 [2024-07-15 09:35:42.289517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.293 [2024-07-15 09:35:42.292369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.293 [2024-07-15 09:35:42.301853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.293 [2024-07-15 09:35:42.302283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.293 [2024-07-15 09:35:42.302324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.293 [2024-07-15 09:35:42.302340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.293 [2024-07-15 09:35:42.302579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.293 [2024-07-15 09:35:42.302786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.293 [2024-07-15 09:35:42.302827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.293 [2024-07-15 09:35:42.302842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.293 [2024-07-15 09:35:42.305734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.293 [2024-07-15 09:35:42.314945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.293 [2024-07-15 09:35:42.315272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.293 [2024-07-15 09:35:42.315299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.293 [2024-07-15 09:35:42.315314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.293 [2024-07-15 09:35:42.315537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.293 [2024-07-15 09:35:42.315746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.293 [2024-07-15 09:35:42.315765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.293 [2024-07-15 09:35:42.315776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.293 [2024-07-15 09:35:42.318693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.293 [2024-07-15 09:35:42.328009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.293 [2024-07-15 09:35:42.328428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.293 [2024-07-15 09:35:42.328486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.293 [2024-07-15 09:35:42.328501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.293 [2024-07-15 09:35:42.328762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.293 [2024-07-15 09:35:42.329001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.293 [2024-07-15 09:35:42.329022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.293 [2024-07-15 09:35:42.329035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.293 [2024-07-15 09:35:42.332097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.293 [2024-07-15 09:35:42.341010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.293 [2024-07-15 09:35:42.341373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.293 [2024-07-15 09:35:42.341414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.293 [2024-07-15 09:35:42.341429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.293 [2024-07-15 09:35:42.341677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.293 [2024-07-15 09:35:42.341930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.293 [2024-07-15 09:35:42.341951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.293 [2024-07-15 09:35:42.341964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.293 [2024-07-15 09:35:42.344857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.293 [2024-07-15 09:35:42.354164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.293 [2024-07-15 09:35:42.354489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.293 [2024-07-15 09:35:42.354516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.293 [2024-07-15 09:35:42.354531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.293 [2024-07-15 09:35:42.354752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.293 [2024-07-15 09:35:42.354996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.293 [2024-07-15 09:35:42.355017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.293 [2024-07-15 09:35:42.355031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.293 [2024-07-15 09:35:42.357925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.293 [2024-07-15 09:35:42.367241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.293 [2024-07-15 09:35:42.367634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.293 [2024-07-15 09:35:42.367661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.293 [2024-07-15 09:35:42.367677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.293 [2024-07-15 09:35:42.367908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.293 [2024-07-15 09:35:42.368137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.293 [2024-07-15 09:35:42.368156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.293 [2024-07-15 09:35:42.368168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.293 [2024-07-15 09:35:42.370938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.293 [2024-07-15 09:35:42.380279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.293 [2024-07-15 09:35:42.380688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.293 [2024-07-15 09:35:42.380744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.293 [2024-07-15 09:35:42.380763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.293 [2024-07-15 09:35:42.381029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.293 [2024-07-15 09:35:42.381258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.293 [2024-07-15 09:35:42.381277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.293 [2024-07-15 09:35:42.381289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.293 [2024-07-15 09:35:42.384175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.294 [2024-07-15 09:35:42.393433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.294 [2024-07-15 09:35:42.393797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.294 [2024-07-15 09:35:42.393834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.294 [2024-07-15 09:35:42.393850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.294 [2024-07-15 09:35:42.394077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.294 [2024-07-15 09:35:42.394286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.294 [2024-07-15 09:35:42.394305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.294 [2024-07-15 09:35:42.394317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.294 [2024-07-15 09:35:42.397215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.294 [2024-07-15 09:35:42.406671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 939267 Killed "${NVMF_APP[@]}" "$@"
00:27:31.294 [2024-07-15 09:35:42.407033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.294 [2024-07-15 09:35:42.407061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.294 [2024-07-15 09:35:42.407078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:31.294 [2024-07-15 09:35:42.407306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:31.294 [2024-07-15 09:35:42.407521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.294 [2024-07-15 09:35:42.407540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.294 [2024-07-15 09:35:42.407553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.294 [2024-07-15 09:35:42.410723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=940229
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 940229
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 940229 ']'
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:31.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:31.294 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.294 [2024-07-15 09:35:42.420169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.294 [2024-07-15 09:35:42.420539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.294 [2024-07-15 09:35:42.420568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.294 [2024-07-15 09:35:42.420586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.294 [2024-07-15 09:35:42.420841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.294 [2024-07-15 09:35:42.421060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.294 [2024-07-15 09:35:42.421081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.294 [2024-07-15 09:35:42.421095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.294 [2024-07-15 09:35:42.424200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
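At this point the old target (pid 939267) has been killed by bdevperf.sh line 35, and tgt_init/nvmfappstart relaunch nvmf_tgt (new pid 940229) inside the cvl_0_0_ns_spdk namespace while waitforlisten polls /var/tmp/spdk.sock. A rough by-hand equivalent of that restart-and-wait step (paths and flags are the ones used in this job; the polling loop is only a sketch of what autotest_common.sh's waitforlisten does, not its actual implementation):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # relaunch the target with the same shm id, trace mask and core mask as above
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the RPC socket until the app answers (the log shows max_retries=100)
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break
      sleep 0.5   # retry interval is illustrative
  done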
00:27:31.294 [2024-07-15 09:35:42.433655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.294 [2024-07-15 09:35:42.434103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.294 [2024-07-15 09:35:42.434131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.294 [2024-07-15 09:35:42.434147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.294 [2024-07-15 09:35:42.434387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.294 [2024-07-15 09:35:42.434601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.294 [2024-07-15 09:35:42.434621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.294 [2024-07-15 09:35:42.434633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.294 [2024-07-15 09:35:42.437740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.294 [2024-07-15 09:35:42.447100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.294 [2024-07-15 09:35:42.447395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.294 [2024-07-15 09:35:42.447438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.294 [2024-07-15 09:35:42.447453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.294 [2024-07-15 09:35:42.447674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.294 [2024-07-15 09:35:42.447934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.294 [2024-07-15 09:35:42.447956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.294 [2024-07-15 09:35:42.447974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.294 [2024-07-15 09:35:42.451097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.294 [2024-07-15 09:35:42.456482] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:27:31.294 [2024-07-15 09:35:42.456550] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:31.294 [2024-07-15 09:35:42.460505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.294 [2024-07-15 09:35:42.460882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.294 [2024-07-15 09:35:42.460911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.294 [2024-07-15 09:35:42.460927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.294 [2024-07-15 09:35:42.461157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.294 [2024-07-15 09:35:42.461373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.294 [2024-07-15 09:35:42.461393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.294 [2024-07-15 09:35:42.461406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.294 [2024-07-15 09:35:42.464443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.294 [2024-07-15 09:35:42.473948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.294 [2024-07-15 09:35:42.474368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.294 [2024-07-15 09:35:42.474395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.294 [2024-07-15 09:35:42.474426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.294 [2024-07-15 09:35:42.474668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.294 [2024-07-15 09:35:42.474910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.294 [2024-07-15 09:35:42.474932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.294 [2024-07-15 09:35:42.474945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.294 [2024-07-15 09:35:42.477956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.554 [2024-07-15 09:35:42.487516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.554 [2024-07-15 09:35:42.487891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.554 [2024-07-15 09:35:42.487920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.554 [2024-07-15 09:35:42.487936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.554 [2024-07-15 09:35:42.488168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.554 [2024-07-15 09:35:42.488381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.554 [2024-07-15 09:35:42.488400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.554 [2024-07-15 09:35:42.488413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.554 [2024-07-15 09:35:42.491613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.554 EAL: No free 2048 kB hugepages reported on node 1
00:27:31.554 [2024-07-15 09:35:42.500951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.554 [2024-07-15 09:35:42.501355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.554 [2024-07-15 09:35:42.501384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.554 [2024-07-15 09:35:42.501400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.554 [2024-07-15 09:35:42.501640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.554 [2024-07-15 09:35:42.501889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.554 [2024-07-15 09:35:42.501911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.554 [2024-07-15 09:35:42.501925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.554 [2024-07-15 09:35:42.505059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
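The EAL notice above only means NUMA node 1 has no 2 MB hugepages reserved; the target still starts because its memory comes from another node's pool. A quick way to inspect the per-node pools on Linux (standard sysfs paths; node numbering varies per machine), plus the way SPDK setups usually reserve them, shown here only as an assumed-typical invocation of scripts/setup.sh:

  # 2 MiB hugepage count per NUMA node; a 0 for node1 matches the EAL notice
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  # reserve pools up front (HUGEMEM is in MB); path assumes an SPDK checkout
  sudo HUGEMEM=4096 ./scripts/setup.sh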
00:27:31.554 [2024-07-15 09:35:42.514382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.554 [2024-07-15 09:35:42.514725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.554 [2024-07-15 09:35:42.514753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.554 [2024-07-15 09:35:42.514769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.554 [2024-07-15 09:35:42.514991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.554 [2024-07-15 09:35:42.515229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.554 [2024-07-15 09:35:42.515249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.554 [2024-07-15 09:35:42.515261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.554 [2024-07-15 09:35:42.518395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.554 [2024-07-15 09:35:42.523505] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:31.554 [2024-07-15 09:35:42.527743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.554 [2024-07-15 09:35:42.528138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.554 [2024-07-15 09:35:42.528167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.554 [2024-07-15 09:35:42.528183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.554 [2024-07-15 09:35:42.528414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.554 [2024-07-15 09:35:42.528629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.554 [2024-07-15 09:35:42.528649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.554 [2024-07-15 09:35:42.528662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.554 [2024-07-15 09:35:42.531639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
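The -m 0xE core mask passed to nvmf_tgt is binary 1110, i.e. cores 1, 2 and 3: that is why spdk_app_start reports three available cores here, and why a reactor later starts on each of those cores. A one-liner decoding any such mask (plain bash arithmetic):

  mask=0xE
  # test each bit of the mask; for 0xE this prints core 1, core 2, core 3
  for core in {0..7}; do (( (mask >> core) & 1 )) && echo "core $core"; done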
00:27:31.554 [2024-07-15 09:35:42.541163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.554 [2024-07-15 09:35:42.541618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.554 [2024-07-15 09:35:42.541668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.554 [2024-07-15 09:35:42.541686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.554 [2024-07-15 09:35:42.541931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.554 [2024-07-15 09:35:42.542177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.554 [2024-07-15 09:35:42.542197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.554 [2024-07-15 09:35:42.542212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.554 [2024-07-15 09:35:42.545308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.554 [2024-07-15 09:35:42.554685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.554 [2024-07-15 09:35:42.555071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.554 [2024-07-15 09:35:42.555106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.554 [2024-07-15 09:35:42.555123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.554 [2024-07-15 09:35:42.555350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.554 [2024-07-15 09:35:42.555565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.554 [2024-07-15 09:35:42.555585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.554 [2024-07-15 09:35:42.555598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.554 [2024-07-15 09:35:42.558709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.554 [2024-07-15 09:35:42.568001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.554 [2024-07-15 09:35:42.568364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.554 [2024-07-15 09:35:42.568405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.554 [2024-07-15 09:35:42.568420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.554 [2024-07-15 09:35:42.568650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.554 [2024-07-15 09:35:42.568894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.554 [2024-07-15 09:35:42.568917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.554 [2024-07-15 09:35:42.568931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.554 [2024-07-15 09:35:42.571941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.554 [2024-07-15 09:35:42.581315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.554 [2024-07-15 09:35:42.581741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.554 [2024-07-15 09:35:42.581796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.554 [2024-07-15 09:35:42.581822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.554 [2024-07-15 09:35:42.582065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.554 [2024-07-15 09:35:42.582300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.554 [2024-07-15 09:35:42.582320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.554 [2024-07-15 09:35:42.582334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.554 [2024-07-15 09:35:42.585366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.554 [2024-07-15 09:35:42.594698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.554 [2024-07-15 09:35:42.595175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.554 [2024-07-15 09:35:42.595226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.554 [2024-07-15 09:35:42.595244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.554 [2024-07-15 09:35:42.595506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.554 [2024-07-15 09:35:42.595707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.554 [2024-07-15 09:35:42.595726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.554 [2024-07-15 09:35:42.595741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.554 [2024-07-15 09:35:42.598879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.554 [2024-07-15 09:35:42.608009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.554 [2024-07-15 09:35:42.608427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.554 [2024-07-15 09:35:42.608455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.554 [2024-07-15 09:35:42.608472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.554 [2024-07-15 09:35:42.608714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.554 [2024-07-15 09:35:42.608949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.554 [2024-07-15 09:35:42.608971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.554 [2024-07-15 09:35:42.608984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.555 [2024-07-15 09:35:42.611890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.555 [2024-07-15 09:35:42.621310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.555 [2024-07-15 09:35:42.621621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.555 [2024-07-15 09:35:42.621664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.555 [2024-07-15 09:35:42.621680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.555 [2024-07-15 09:35:42.621932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.555 [2024-07-15 09:35:42.622168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.555 [2024-07-15 09:35:42.622188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.555 [2024-07-15 09:35:42.622202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.555 [2024-07-15 09:35:42.625146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.555 [2024-07-15 09:35:42.630612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:31.555 [2024-07-15 09:35:42.630655] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:31.555 [2024-07-15 09:35:42.630668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:31.555 [2024-07-15 09:35:42.630678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:31.555 [2024-07-15 09:35:42.630688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:31.555 [2024-07-15 09:35:42.630952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:27:31.555 [2024-07-15 09:35:42.630977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:27:31.555 [2024-07-15 09:35:42.630980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:31.555 [2024-07-15 09:35:42.634820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.555 [2024-07-15 09:35:42.635224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.555 [2024-07-15 09:35:42.635254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.555 [2024-07-15 09:35:42.635270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.555 [2024-07-15 09:35:42.635487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.555 [2024-07-15 09:35:42.635717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.555 [2024-07-15 09:35:42.635738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.555 [2024-07-15 09:35:42.635752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.555 [2024-07-15 09:35:42.638942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.555 [2024-07-15 09:35:42.648260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.555 [2024-07-15 09:35:42.648732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.555 [2024-07-15 09:35:42.648767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.555 [2024-07-15 09:35:42.648785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.555 [2024-07-15 09:35:42.649014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.555 [2024-07-15 09:35:42.649248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.555 [2024-07-15 09:35:42.649270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.555 [2024-07-15 09:35:42.649286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.555 [2024-07-15 09:35:42.652446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.555 [2024-07-15 09:35:42.661766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.555 [2024-07-15 09:35:42.662294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.555 [2024-07-15 09:35:42.662332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.555 [2024-07-15 09:35:42.662350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.555 [2024-07-15 09:35:42.662587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.555 [2024-07-15 09:35:42.662841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.555 [2024-07-15 09:35:42.662864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.555 [2024-07-15 09:35:42.662880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.555 [2024-07-15 09:35:42.666047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
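The app_setup_trace notices a few lines up describe how to pull trace data for this run: the target was started with -i 0 and -e 0xFFFF, so all tracepoint groups are recording into shm instance 0. Following the log's own suggestion (the build/bin path assumes the standard SPDK build layout used in this workspace):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # live snapshot of the nvmf app's tracepoints, as the notice suggests
  "$SPDK/build/bin/spdk_trace" -s nvmf -i 0
  # or keep the raw shm file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/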
00:27:31.555 [2024-07-15 09:35:42.675367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.555 [2024-07-15 09:35:42.675856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.555 [2024-07-15 09:35:42.675895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.555 [2024-07-15 09:35:42.675914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.555 [2024-07-15 09:35:42.676152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.555 [2024-07-15 09:35:42.676367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.555 [2024-07-15 09:35:42.676388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.555 [2024-07-15 09:35:42.676404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.555 [2024-07-15 09:35:42.679628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.555 [2024-07-15 09:35:42.688951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.555 [2024-07-15 09:35:42.689453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.555 [2024-07-15 09:35:42.689489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.555 [2024-07-15 09:35:42.689508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.555 [2024-07-15 09:35:42.689729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.555 [2024-07-15 09:35:42.689961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.555 [2024-07-15 09:35:42.689984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.555 [2024-07-15 09:35:42.690000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.555 [2024-07-15 09:35:42.693208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.555 [2024-07-15 09:35:42.702493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.555 [2024-07-15 09:35:42.703020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.555 [2024-07-15 09:35:42.703056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.555 [2024-07-15 09:35:42.703074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.555 [2024-07-15 09:35:42.703311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.555 [2024-07-15 09:35:42.703525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.555 [2024-07-15 09:35:42.703546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.555 [2024-07-15 09:35:42.703561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.555 [2024-07-15 09:35:42.706751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.555 [2024-07-15 09:35:42.715926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.555 [2024-07-15 09:35:42.716316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.555 [2024-07-15 09:35:42.716344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.555 [2024-07-15 09:35:42.716360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.555 [2024-07-15 09:35:42.716573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.555 [2024-07-15 09:35:42.716826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.555 [2024-07-15 09:35:42.716848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.555 [2024-07-15 09:35:42.716862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.555 [2024-07-15 09:35:42.720018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.555 [2024-07-15 09:35:42.729551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.555 [2024-07-15 09:35:42.729905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.555 [2024-07-15 09:35:42.729933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.555 [2024-07-15 09:35:42.729949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.555 [2024-07-15 09:35:42.730163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.555 [2024-07-15 09:35:42.730380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.555 [2024-07-15 09:35:42.730401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.555 [2024-07-15 09:35:42.730415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.555 [2024-07-15 09:35:42.733640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.555 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:31.555 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:27:31.555 09:35:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:31.555 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:31.555 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.555 [2024-07-15 09:35:42.743261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.555 [2024-07-15 09:35:42.743647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.555 [2024-07-15 09:35:42.743676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.555 [2024-07-15 09:35:42.743697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.555 [2024-07-15 09:35:42.743930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.555 [2024-07-15 09:35:42.744163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.556 [2024-07-15 09:35:42.744184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.556 [2024-07-15 09:35:42.744197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.814 [2024-07-15 09:35:42.747520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
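Every repetition above is one lap of the same reconnect cycle: nvme_ctrlr_disconnect tears the controller down, posix_sock_create's connect() is refused with errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 yet, reconnect polling therefore fails, and _bdev_nvme_reset_ctrlr_complete logs "Resetting controller failed" before bdev_nvme schedules the next attempt a few milliseconds later. The shape of that wait-for-listener loop as a standalone sketch (nc is an assumption here, not something this test actually uses):

    # Keep probing the target address; each refused attempt mirrors the
    # posix_sock_create "errno = 111" lines in the log above.
    while ! nc -z 10.0.0.2 4420; do
        echo 'connect() refused (errno 111); retrying'
        sleep 0.01
    done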
00:27:31.814 [2024-07-15 09:35:42.756841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.814 [2024-07-15 09:35:42.757164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.814 [2024-07-15 09:35:42.757193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.814 [2024-07-15 09:35:42.757210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.814 09:35:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:31.814 [2024-07-15 09:35:42.757430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.814 09:35:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:31.814 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.814 [2024-07-15 09:35:42.757649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.814 [2024-07-15 09:35:42.757671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.814 [2024-07-15 09:35:42.757685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.814 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.814 [2024-07-15 09:35:42.760903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.814 [2024-07-15 09:35:42.762481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:31.814 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.814 09:35:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:31.814 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.814 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.814 [2024-07-15 09:35:42.770448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.814 [2024-07-15 09:35:42.770792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.814 [2024-07-15 09:35:42.770827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.814 [2024-07-15 09:35:42.770843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.814 [2024-07-15 09:35:42.771057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.814 [2024-07-15 09:35:42.771283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.814 [2024-07-15 09:35:42.771304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.814 [2024-07-15 09:35:42.771317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.814 [2024-07-15 09:35:42.774529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.814 [2024-07-15 09:35:42.783807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.814 [2024-07-15 09:35:42.784139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.814 [2024-07-15 09:35:42.784167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.814 [2024-07-15 09:35:42.784183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.814 [2024-07-15 09:35:42.784411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.814 [2024-07-15 09:35:42.784637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.814 [2024-07-15 09:35:42.784658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.814 [2024-07-15 09:35:42.784670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.814 [2024-07-15 09:35:42.787780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.814 [2024-07-15 09:35:42.797223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.814 [2024-07-15 09:35:42.797581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.814 [2024-07-15 09:35:42.797611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.814 [2024-07-15 09:35:42.797627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.814 [2024-07-15 09:35:42.797851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.814 [2024-07-15 09:35:42.798070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.814 [2024-07-15 09:35:42.798106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.814 [2024-07-15 09:35:42.798120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.814 [2024-07-15 09:35:42.801245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.814 [2024-07-15 09:35:42.810777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.814 [2024-07-15 09:35:42.811303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.815 [2024-07-15 09:35:42.811339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.815 [2024-07-15 09:35:42.811357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.815 [2024-07-15 09:35:42.811591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.815 [2024-07-15 09:35:42.811831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.815 [2024-07-15 09:35:42.811854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.815 [2024-07-15 09:35:42.811870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.815 Malloc0
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.815 [2024-07-15 09:35:42.815170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.815 [2024-07-15 09:35:42.824405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.815 [2024-07-15 09:35:42.824731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.815 [2024-07-15 09:35:42.824760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9aac0 with addr=10.0.0.2, port=4420
00:27:31.815 [2024-07-15 09:35:42.824783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9aac0 is same with the state(5) to be set
00:27:31.815 [2024-07-15 09:35:42.825008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9aac0 (9): Bad file descriptor
00:27:31.815 [2024-07-15 09:35:42.825239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.815 [2024-07-15 09:35:42.825261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.815 [2024-07-15 09:35:42.825274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.815 [2024-07-15 09:35:42.828565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.815 [2024-07-15 09:35:42.832116] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.815 09:35:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 939553
00:27:31.815 [2024-07-15 09:35:42.837994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.815 [2024-07-15 09:35:42.871825] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:41.793
00:27:41.793 Latency(us)
00:27:41.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:41.793 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:41.793 Verification LBA range: start 0x0 length 0x4000
00:27:41.793 Nvme1n1 : 15.01 6857.83 26.79 10143.05 0.00 7506.45 549.17 17379.18
00:27:41.793 ===================================================================================================================
00:27:41.793 Total : 6857.83 26.79 10143.05 0.00 7506.45 549.17 17379.18
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 940229 ']'
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 940229
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 940229 ']'
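The bdevperf summary above checks out arithmetically: with 4096-byte I/O, throughput in MiB/s is IOPS x 4096 / 2^20, and 6857.83 x 4096 / 1048576 ≈ 26.79, matching the MiB/s column. The large Fail/s figure (10143.05) is failed I/O per second accumulated while the controller was unreachable during the reset storm, not a misprinted column. For example:

    # Reproduce the MiB/s column from the IOPS column (4 KiB I/O size):
    awk 'BEGIN { printf "%.2f\n", 6857.83 * 4096 / 1048576 }'   # -> 26.79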
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 940229
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 940229
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 940229'
killing process with pid 940229
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 940229
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 940229
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:41.793 09:35:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:43.698 09:35:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:43.698
00:27:43.698 real 0m22.741s
00:27:43.698 user 1m0.834s
00:27:43.698 sys 0m4.371s
00:27:43.698 09:35:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:43.698 09:35:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:43.698 ************************************
00:27:43.698 END TEST nvmf_bdevperf
00:27:43.698 ************************************
00:27:43.699 09:35:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:27:43.699 09:35:54 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:27:43.699 09:35:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:27:43.699 09:35:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:43.699 09:35:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:43.699 ************************************
00:27:43.699 START TEST nvmf_target_disconnect
00:27:43.699 ************************************
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
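The nvmf_bdevperf teardown traced above follows the usual nvmftestfini order: delete the subsystem over RPC, unload the initiator modules (the bare "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines are modprobe -v -r's own output), killprocess on the target PID, namespace removal, and an address flush. Condensed to the commands actually visible in this run:

    modprobe -v -r nvme-tcp       # drags out nvme_tcp, nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 940229                   # killprocess: the PID belongs to reactor_1, not a sudo child
    ip -4 addr flush cvl_0_1      # release the initiator-side test address

With that done, the log moves straight into the next suite, target_disconnect.sh, whose setup follows.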
00:27:43.699 * Looking for test storage...
00:27:43.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable
00:27:43.699 09:35:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=()
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=()
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=()
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=()
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=()
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=()
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=()
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
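The array plumbing above is nvmf/common.sh building its whitelist of supported NICs: Intel (0x8086) E810 device IDs 0x1592/0x159b and X722 0x37d2, plus a list of Mellanox (0x15b3) IDs. The scan that follows resolves each matching PCI function to its net device through sysfs; an equivalent one-off check for the first port found below (paths taken from this run):

    cat /sys/bus/pci/devices/0000:09:00.0/vendor   # -> 0x8086
    cat /sys/bus/pci/devices/0000:09:00.0/device   # -> 0x159b (E810, ice driver)
    ls /sys/bus/pci/devices/0000:09:00.0/net/      # -> cvl_0_0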
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
Found 0000:09:00.0 (0x8086 - 0x159b)
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
Found 0000:09:00.1 (0x8086 - 0x159b)
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
Found net devices under 0000:09:00.0: cvl_0_0
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
Found net devices under 0000:09:00.1: cvl_0_1
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
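nvmftestinit is now splitting the two E810 ports into a point-to-point test fabric: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24); the link-up, iptables and ping checks follow in the trace below. The plumbing, gathered in one place from this run's commands:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # root ns -> target ns; the reverse ping is run inside the namespace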
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:45.608 09:35:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:45.609 ************************************
00:27:45.609 START TEST nvmf_target_disconnect_tc1
00:27:45.609 ************************************
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:27:45.609 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:45.609 EAL: No free 2048 kB hugepages reported on node 1
00:27:45.869 [2024-07-15 09:35:56.827229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.869 [2024-07-15 09:35:56.827299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc821a0 with addr=10.0.0.2, port=4420
00:27:45.869 [2024-07-15 09:35:56.827332] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:27:45.869 [2024-07-15 09:35:56.827353] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:27:45.869 [2024-07-15 09:35:56.827365] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed
00:27:45.869 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:27:45.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:27:45.869 Initializing NVMe Controllers
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:45.869
00:27:45.869 real 0m0.087s
00:27:45.869 user 0m0.040s
00:27:45.869 sys 0m0.047s
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:45.869 ************************************
00:27:45.869 END TEST nvmf_target_disconnect_tc1
00:27:45.869 ************************************
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:45.869 ************************************
00:27:45.869 START TEST nvmf_target_disconnect_tc2
00:27:45.869 ************************************
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=943371
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 943371
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 943371 ']'
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
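disconnect_init runs its own target inside the namespace on cores 4-7 (-m 0xF0) while the reconnect initiator will later take -c 0xF (cores 0-3), so the two sides never contend for reactors. The startup idiom, reduced to what the trace shows (waitforlisten is the autotest_common.sh helper; the backgrounding is implicit in nvmfappstart):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls until /var/tmp/spdk.sock accepts RPC connections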
00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:45.869 09:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.869 [2024-07-15 09:35:56.941647] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:27:45.869 [2024-07-15 09:35:56.941747] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.869 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.869 [2024-07-15 09:35:57.003755] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:46.128 [2024-07-15 09:35:57.105427] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.128 [2024-07-15 09:35:57.105479] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.128 [2024-07-15 09:35:57.105506] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.128 [2024-07-15 09:35:57.105518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.128 [2024-07-15 09:35:57.105527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.128 [2024-07-15 09:35:57.105607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:46.128 [2024-07-15 09:35:57.105712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:46.128 [2024-07-15 09:35:57.105819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:46.128 [2024-07-15 09:35:57.105822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.128 Malloc0 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:46.128 09:35:57 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.128 [2024-07-15 09:35:57.286449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.128 [2024-07-15 09:35:57.314664] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.128 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.388 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.388 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=943486 00:27:46.388 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:46.388 09:35:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:46.388 EAL: No free 2048 kB 
hugepages reported on node 1
00:27:48.303 09:35:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 943371
00:27:48.303 09:35:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:27:48.303 Read completed with error (sct=0, sc=8)
00:27:48.303 starting I/O failed
00:27:48.304 Write completed with error (sct=0, sc=8)
00:27:48.304 starting I/O failed
[... dozens of identical Read/Write completion errors trimmed; a burst like this precedes each of the four CQ transport errors below ...]
00:27:48.304 [2024-07-15 09:35:59.341441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.304 [2024-07-15 09:35:59.341774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:48.304 [2024-07-15 09:35:59.342117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:48.305 [2024-07-15 09:35:59.342415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.305 [2024-07-15 09:35:59.342619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.305 [2024-07-15 09:35:59.342658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.305 qpair failed and we were unable to recover it.
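errno = 111 on Linux is ECONNREFUSED: the kill -9 above took the nvmf target (pid 943371) down, so nothing is listening on 10.0.0.2:4420 any more and each reconnect attempt is actively refused rather than timing out. The mapping is easy to confirm on any Linux host with python3 available:

  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # prints: ECONNREFUSED Connection refused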
00:27:48.305 [2024-07-15 09:35:59.343394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.343421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.343542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.343568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.343686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.343726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.343878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.343905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.343993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.344026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.344183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.344213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.344413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.344441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.344526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.344552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.344695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.344720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.344853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.344892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 
00:27:48.305 [2024-07-15 09:35:59.344989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.345017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.345140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.345168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.345262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.345288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.345403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.345428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.345565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.345590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.345676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.345702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.345826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.345868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.345967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.345998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.346112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.346149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.346309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.346346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 
00:27:48.305 [2024-07-15 09:35:59.346452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.346480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.346649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.346708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.346862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.346889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.346989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.347016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.347120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.347147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.347259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.347286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.347398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.347424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.347512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.347538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.347626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.347659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.347771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.347797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 
00:27:48.305 [2024-07-15 09:35:59.347893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.347918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.348006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.348033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.305 qpair failed and we were unable to recover it. 00:27:48.305 [2024-07-15 09:35:59.348116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.305 [2024-07-15 09:35:59.348141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.348220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.348246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.348329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.348355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.348470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.348496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.348607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.348633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.348712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.348738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.348863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.348893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.348983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.349008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 
00:27:48.306 [2024-07-15 09:35:59.349091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.349120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.349197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.349222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.349334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.349372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.349526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.349574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.349688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.349714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.349806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.349843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.349926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.349952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.350035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.350074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.350157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.350183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.350299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.350325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 
00:27:48.306 [2024-07-15 09:35:59.350406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.350432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.350542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.350571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.350649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.350675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.350766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.350791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.350885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.350911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.351020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.351051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.351146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.351173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.351322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.351349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.351462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.351489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.351630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.351659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 
00:27:48.306 [2024-07-15 09:35:59.351775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.351807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.351907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.351934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.352046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.352073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.352187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.352212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.352326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.352353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.352445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.352471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.352556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.352583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.352694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.352720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.352840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.352866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.352970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.352998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 
00:27:48.306 [2024-07-15 09:35:59.353116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.353145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.353231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.353258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-07-15 09:35:59.353401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.306 [2024-07-15 09:35:59.353428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.353516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.353543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.353690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.353717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.353840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.353866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.353983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.354010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.354101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.354127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.354216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.354244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.354359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.354385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 
00:27:48.307 [2024-07-15 09:35:59.354476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.354502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.354611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.354637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.354736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.354774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.354877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.354904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.354996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.355023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.355146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.355173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.355284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.355310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.355384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.355410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.355520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.355547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.355689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.355717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 
00:27:48.307 [2024-07-15 09:35:59.355813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.355851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.355940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.355966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.356047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.356086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.356197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.356222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.356313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.356339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.356449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.356479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.356583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.356621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.356744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.356771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.356909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.356938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.357049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.357077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 
00:27:48.307 [2024-07-15 09:35:59.357153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.357181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.357295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.357322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.357405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.357432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.357520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.357551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.357681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.357720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.357846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.357874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.357963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.357990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.358109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.358136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.358226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.358253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.358401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.358428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 
00:27:48.307 [2024-07-15 09:35:59.358539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.358566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.358690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.358721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.358846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.307 [2024-07-15 09:35:59.358874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-07-15 09:35:59.358959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.358986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.359065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.359106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.359229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.359255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.359340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.359366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.359471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.359498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.359613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.359639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.359735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.359773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 
00:27:48.308 [2024-07-15 09:35:59.359903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.359932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.360026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.360051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.360196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.360256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.360374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.360399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.360496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.360521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.360630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.360657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.360748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.360775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.360867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.360894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.360989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.361015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-07-15 09:35:59.361138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.308 [2024-07-15 09:35:59.361165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.308 qpair failed and we were unable to recover it. 
00:27:48.308 [2024-07-15 09:35:59.361247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.361287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.361370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.361395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.361516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.361541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.361623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.361649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.361735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.361763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.361967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.361993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.362112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.362138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.362214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.362240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.362355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.362381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.362468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.362495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.362580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.362606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.362688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.362714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.362805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.362831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.362943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.362969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 09:35:59.363053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 09:35:59.363079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.363163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.363189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.363311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.363337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.363448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.363473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.363576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.363602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.363679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.363708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.363848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.363874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.363989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.364015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.364091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.364117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.364223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.364249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.364383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.364408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.364519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.364544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.364663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.364689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.364797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.364830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.364917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.364943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.365061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.365087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.365201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.365227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.365325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.365353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.365453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.365479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.365602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.365628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.365805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.365831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.365969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.365995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.366080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.366107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.366279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.366327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.366534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.366561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.366673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.366699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.366783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.366814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.366901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.366927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.367008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.367034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.367173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.367199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.367315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.367341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.367450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.367476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.367567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.367594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.367731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.367770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.367931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.367970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.368068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.368095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.368234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.368259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.368393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.368419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.368562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.368589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.368677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.368703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.368828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.309 [2024-07-15 09:35:59.368854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.309 qpair failed and we were unable to recover it.
00:27:48.309 [2024-07-15 09:35:59.368975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.369002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.369086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.369112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.369251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.369277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.369416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.369442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.369558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.369588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.369700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.369739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.369895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.369923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.370059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.370084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.370196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.370222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.370335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.370360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.370500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.370526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.370612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.370639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.370741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.370780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.370911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.370938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.371030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.371057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.371144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.371171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.371249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.371275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.371393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.371419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.371543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.371568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.371679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.371706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.371783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.371816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.371928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.371953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.372040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.372066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.372202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.372228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.372338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.372365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.372497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.372523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.372658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.372683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.372776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.372822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.372922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.372950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.373030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.373056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.373164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.373190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.373423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.373487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.373607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.373634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.373759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.373785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.373885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.373911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.373998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.374024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.310 qpair failed and we were unable to recover it.
00:27:48.310 [2024-07-15 09:35:59.374143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.310 [2024-07-15 09:35:59.374168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.374298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.374323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.374442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.374469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.374561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.374588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.374672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.374699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.374809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.374836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.374924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.374950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.375089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.375115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.375256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.375287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.375398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.375424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.375548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.375574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.375654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.375681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.375788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.375834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.375931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.375959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.376047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.376074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.376157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.376184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.376297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.376323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.376432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.376458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.376573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.376600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.376712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.376740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.376942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.376968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.377085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.377112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.377234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.377260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.377349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.377375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.377509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.377535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.377644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.377670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.377773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.377799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.377999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.378024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.378158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.378183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.378293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.378319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.378403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.378429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.378544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.378570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.378717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.378744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.378880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.378906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.378998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.379024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.379103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.379133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.379271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.379297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.379433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.379458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.379595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.379636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.379760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.379799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.311 [2024-07-15 09:35:59.379941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.311 [2024-07-15 09:35:59.379980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.311 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.380107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.380134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.380267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.380296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.380455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.380509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.380608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.380633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.380746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.380772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.380892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.380918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.380995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.381021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.381125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.381150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.381238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.381265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.381377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.381403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.381509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.381535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.381650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.381676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.381798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.381831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.381970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.381996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.382107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.382133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.382214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.382240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.382348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.382374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.382463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.382489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.382570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.382596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.382714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.382753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.382885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.382924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.383044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.383075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.383190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.383216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.383297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.383323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.383408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.383434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.383581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.383607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.383710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.383748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.383888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.383916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.384035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.384061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.384147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.384173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.384265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.384291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.384384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.312 [2024-07-15 09:35:59.384411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.312 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 09:35:59.384548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.313 [2024-07-15 09:35:59.384574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.313 qpair failed and we were unable to recover it.
00:27:48.313 [2024-07-15 09:35:59.384712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.313 [2024-07-15 09:35:59.384738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.313 qpair failed and we were unable to recover it.
00:27:48.313 [2024-07-15 09:35:59.384867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.313 [2024-07-15 09:35:59.384906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.313 qpair failed and we were unable to recover it.
00:27:48.313 [2024-07-15 09:35:59.385009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.313 [2024-07-15 09:35:59.385036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.313 qpair failed and we were unable to recover it.
00:27:48.313 [2024-07-15 09:35:59.385154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.313 [2024-07-15 09:35:59.385180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.313 qpair failed and we were unable to recover it.
00:27:48.313 [2024-07-15 09:35:59.385368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.313 [2024-07-15 09:35:59.385420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.313 qpair failed and we were unable to recover it.
00:27:48.313 [2024-07-15 09:35:59.385585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.385648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.385807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.385846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.385946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.385973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.386090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.386116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.386229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.386255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.386359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.386385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.386497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.386523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.386637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.386663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.386805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.386846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.386965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.386991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-07-15 09:35:59.387138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.387166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.387281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.387306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.387416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.387441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.387521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.387547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.387628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.387654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.387769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.387795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.387913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.387938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.388027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.388053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.388164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.388190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.388300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.388326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-07-15 09:35:59.388431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.388457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.388562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.388588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.388702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.388729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.388826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.388870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.388995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.389022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.389126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.389152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.389265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.389290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.389401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.389427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.389512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.389539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.389653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.389679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-07-15 09:35:59.389815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.389841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.389978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.390003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.390087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.390112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.390223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 09:35:59.390250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 09:35:59.390364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.390389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.390467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.390492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.390592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.390630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.390732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.390760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.390867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.390905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.391029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.391055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 
00:27:48.314 [2024-07-15 09:35:59.391138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.391164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.391247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.391273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.391434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.391487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.391578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.391604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.391684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.391710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.391809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.391835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.391947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.391972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.392059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.392085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.392199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.392225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.392298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.392323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 
00:27:48.314 [2024-07-15 09:35:59.392419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.392445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.392557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.392583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.392690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.392728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.392858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.392898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.393044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.393071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.393188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.393214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.393348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.393416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.393496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.393523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.393637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.393664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.393794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.393839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 
00:27:48.314 [2024-07-15 09:35:59.393957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.393984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.394071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.394096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.394176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.394202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.394318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.394348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.394485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.394510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.394707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.394732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.394867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.394894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.395005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.395031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.395115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.395141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.395239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.395278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 
00:27:48.314 [2024-07-15 09:35:59.395391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.395418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.395521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.395560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.395690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.395716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.395825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.395859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.314 [2024-07-15 09:35:59.395952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.314 [2024-07-15 09:35:59.395977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.314 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.396065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.396091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.396176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.396202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.396317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.396344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.396538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.396565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.396676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.396701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 
00:27:48.315 [2024-07-15 09:35:59.396785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.396818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.396905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.396931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.397019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.397044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.397217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.397242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.397394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.397446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.397582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.397610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.397720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.397745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.397827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.397853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.397933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.397958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.398069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.398094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 
00:27:48.315 [2024-07-15 09:35:59.398208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.398238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.398322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.398349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.398539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.398566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.398646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.398674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.398790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.398823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.398937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.398964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.399071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.399130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.399254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.399280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.399391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.399425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.399508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.399534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 
00:27:48.315 [2024-07-15 09:35:59.399646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.399676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.399770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.399797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.399892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.399917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.400000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.400026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.400141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.400167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.400284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.400310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.400507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.400536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.400614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.400639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.400754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.400780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.400934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.400962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 
00:27:48.315 [2024-07-15 09:35:59.401046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.401073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.401161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.401187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.315 qpair failed and we were unable to recover it. 00:27:48.315 [2024-07-15 09:35:59.401303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.315 [2024-07-15 09:35:59.401329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.401413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.401441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.401555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.401582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.401672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.401711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.401855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.401884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.402004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.402045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.402165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.402192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.402315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.402343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 
00:27:48.316 [2024-07-15 09:35:59.402575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.402631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.402739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.402765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.402883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.402909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.402997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.403023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.403117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.403143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.403227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.403253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.403361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.403387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.403509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.403535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.403624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.403650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.403736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.403763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 
00:27:48.316 [2024-07-15 09:35:59.403894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.403920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.404116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.404143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.404245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.404270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.404378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.404404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.404486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.404512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.404639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.404678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.404834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.404875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.404991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.405019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.405138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.405167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.405307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.405335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 
00:27:48.316 [2024-07-15 09:35:59.405444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.405470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.405563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.405589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.405673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.405699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.405835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.405861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.405987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.406013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.406100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.406127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.406265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.316 [2024-07-15 09:35:59.406293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.316 qpair failed and we were unable to recover it. 00:27:48.316 [2024-07-15 09:35:59.406380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 09:35:59.406408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 09:35:59.406521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 09:35:59.406547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 09:35:59.406638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 09:35:59.406663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 
00:27:48.317 [2024-07-15 09:35:59.406740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.317 [2024-07-15 09:35:59.406766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.317 qpair failed and we were unable to recover it.
00:27:48.322 [last message sequence repeated: the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplet recurs continuously from 09:35:59.406 through 09:35:59.435 for tqpair=0x1d69200, 0x7f6a50000b90, 0x7f6a58000b90, and 0x7f6a60000b90, every attempt targeting addr=10.0.0.2, port=4420]
00:27:48.322 [2024-07-15 09:35:59.436023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.436049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 00:27:48.322 [2024-07-15 09:35:59.436203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.436256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 00:27:48.322 [2024-07-15 09:35:59.436339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.436366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 00:27:48.322 [2024-07-15 09:35:59.436474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.436501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 00:27:48.322 [2024-07-15 09:35:59.436611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.436638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 00:27:48.322 [2024-07-15 09:35:59.436791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.436839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 00:27:48.322 [2024-07-15 09:35:59.436966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.436995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 00:27:48.322 [2024-07-15 09:35:59.437107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.437135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 00:27:48.322 [2024-07-15 09:35:59.437315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.437372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 00:27:48.322 [2024-07-15 09:35:59.437446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.437471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 
00:27:48.322 [2024-07-15 09:35:59.437553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.437578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 00:27:48.322 [2024-07-15 09:35:59.437689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.437715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 00:27:48.322 [2024-07-15 09:35:59.437850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.437891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 00:27:48.322 [2024-07-15 09:35:59.438010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.322 [2024-07-15 09:35:59.438040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.322 qpair failed and we were unable to recover it. 00:27:48.322 [2024-07-15 09:35:59.438152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.438181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.438298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.438325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.438504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.438531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.438640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.438666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.438811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.438839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.438951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.438978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 
00:27:48.323 [2024-07-15 09:35:59.439090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.439116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.439207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.439233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.439341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.439368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.439483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.439511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.439591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.439617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.439736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.439766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.439880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.439921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.440068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.440096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.440207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.440233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.440350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.440378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 
00:27:48.323 [2024-07-15 09:35:59.440500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.440540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.440665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.440694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.440830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.440857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.440948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.440974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.441084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.441111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.441249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.441277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.441416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.441444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.441565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.441595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.441684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.441716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.441813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.441839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 
00:27:48.323 [2024-07-15 09:35:59.441923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.441949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.442062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.442087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.442170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.442195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.442312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.442339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.442454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.442481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.442588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.442617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.442701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.442727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.442837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.442864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.442956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.442981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 00:27:48.323 [2024-07-15 09:35:59.443072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.443099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.323 qpair failed and we were unable to recover it. 
00:27:48.323 [2024-07-15 09:35:59.443177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.323 [2024-07-15 09:35:59.443202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.443309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.443335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.443424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.443451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.443527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.443552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.443664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.443691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.443805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.443835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.443911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.443937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.444051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.444078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.444225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.444260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.444475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.444528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 
00:27:48.324 [2024-07-15 09:35:59.444640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.444669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.444779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.444812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.444910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.444936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.445019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.445044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.445129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.445155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.445294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.445321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.445409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.445435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.445549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.445576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.445665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.445691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.445778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.445811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 
00:27:48.324 [2024-07-15 09:35:59.445929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.445956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.446067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.446094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.446179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.446204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.446290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.446315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.446430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.446456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.446564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.446592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.446672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.446697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.446783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.446819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.446939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.446970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.447058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.447083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 
00:27:48.324 [2024-07-15 09:35:59.447225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.447252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.447365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.447393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.447533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.447559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.447689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.447730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.447854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.447884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.448037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.448077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.448194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.448223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.448339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.448367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.324 [2024-07-15 09:35:59.448441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.324 [2024-07-15 09:35:59.448467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.324 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.448550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.448576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 
00:27:48.325 [2024-07-15 09:35:59.448728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.448769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.448868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.448894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.449043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.449072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.449162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.449188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.449328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.449355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.449458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.449524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.449638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.449665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.449827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.449868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.449955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.449982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.450093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.450121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 
00:27:48.325 [2024-07-15 09:35:59.450208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.450233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.450404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.450459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.450596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.450663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.450779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.450813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.450898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.450923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.451042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.451071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.451180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.451207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.451316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.451344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.451449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.451474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.451586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.451613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 
00:27:48.325 [2024-07-15 09:35:59.451752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.451780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.451893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.451921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.452012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.452041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.452127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.452153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.452266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.452293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.452400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.452425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.452558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.452585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.452688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.452716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.452793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.452827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.452919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.452944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 
00:27:48.325 [2024-07-15 09:35:59.453023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.453050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.453166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.453193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.453283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.453309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.453399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.453426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.453504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.325 [2024-07-15 09:35:59.453530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.325 qpair failed and we were unable to recover it. 00:27:48.325 [2024-07-15 09:35:59.453667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.453694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.453812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.453840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.453952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.453979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.454063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.454088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.454249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.454296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 
00:27:48.326 [2024-07-15 09:35:59.454458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.454521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.454680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.454721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.454839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.454868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.454955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.454980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.455088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.455115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.455226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.455251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.455368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.455395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.455530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.455567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.455715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.455757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.455859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.455886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 
00:27:48.326 [2024-07-15 09:35:59.455998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.456024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.456118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.456147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.456234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.456260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.456350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.456378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.456490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.456518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.456635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.456669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.456785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.456818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.456902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.456927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.456999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.457025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.457130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.457156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 
00:27:48.326 [2024-07-15 09:35:59.457276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.457305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.457390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.457416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.457509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.457536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.457648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.457675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.457785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.457819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.457940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 09:35:59.457969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 09:35:59.458080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.458107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.458214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.458241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.458325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.458351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.458451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.458483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 
00:27:48.327 [2024-07-15 09:35:59.458618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.458645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.458729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.458754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.458882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.458910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.459019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.459047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.459160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.459187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.459272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.459297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.459376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.459401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.459480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.459505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.459578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.459604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.459693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.459718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 
00:27:48.327 [2024-07-15 09:35:59.459858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.459887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.459999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.460027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.460138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.460170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.460282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.460309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.460389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.460416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.460567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.460608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.460699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.460728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.460873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.460900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.461012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.461040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.461150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.461177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 
00:27:48.327 [2024-07-15 09:35:59.461299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.461326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.461417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.461444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.461556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.461583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.461696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.461723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.461811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.461837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.461921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.461947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.462076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.462103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.462239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.462265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.462350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.462376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.462478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.462505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 
00:27:48.327 [2024-07-15 09:35:59.462615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.462642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.462723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.462748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.462859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.462886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.463024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.463052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.463139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.463166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.463242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.463268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 09:35:59.463384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 09:35:59.463411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.463485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.463510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.463615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.463641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.463759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.463788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 
00:27:48.328 [2024-07-15 09:35:59.463909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.463936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.464017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.464043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.464122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.464148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.464262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.464289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.464370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.464396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.464502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.464530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.464627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.464668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.464791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.464838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.464952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.464981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.465068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.465093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 
00:27:48.328 [2024-07-15 09:35:59.465207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.465233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.465342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.465370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.465483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.465512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.465612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.465641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.465760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.465788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.465878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.465904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.465989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.466014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.466128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.466155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.466266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.466293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.466372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.466397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 
00:27:48.328 [2024-07-15 09:35:59.466500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.466527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.466608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.466634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.466723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.466761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.466886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.466915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.467002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.467028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.467176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.467232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.467371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.467437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.467573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.467600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.467744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.467772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.467865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.467891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 
00:27:48.328 [2024-07-15 09:35:59.467974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.468002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.468146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.468209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.468314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.468381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.468489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.468516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.468597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.468623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.468743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.328 [2024-07-15 09:35:59.468783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.328 qpair failed and we were unable to recover it. 00:27:48.328 [2024-07-15 09:35:59.468893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.468923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.469014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.469041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.469128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.469156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.469267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.469299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 
00:27:48.329 [2024-07-15 09:35:59.469418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.469446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.469584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.469619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.469723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.469762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.469908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.469936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.470026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.470063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.470173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.470200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.470371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.470425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.470569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.470629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.470706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.470732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.470845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.470875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 
00:27:48.329 [2024-07-15 09:35:59.471002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.471028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.471149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.471177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.471266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.471293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.471442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.471469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.471545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.471582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.471673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.471698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.471823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.471856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.471954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.471983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.472094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.472121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.472212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.472240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 
00:27:48.329 [2024-07-15 09:35:59.472363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.472390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.472503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.472530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.472638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.472667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.472812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.472840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.472953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.472980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.473062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.473090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.473239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.473296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.473435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.473495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.473578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.473609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.473756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.473783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 
00:27:48.329 [2024-07-15 09:35:59.473904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.473931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.474069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.474096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.474279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.474306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.474422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.474449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.474552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.474579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.474666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.474703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.329 qpair failed and we were unable to recover it. 00:27:48.329 [2024-07-15 09:35:59.474842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.329 [2024-07-15 09:35:59.474872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.474994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.475023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.475116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.475143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.475234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.475266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 
00:27:48.330 [2024-07-15 09:35:59.475431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.475483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.475619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.475646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.475730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.475758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.475874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.475903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.475982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.476009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.476094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.476122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.476257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.476284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.476391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.476418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.476500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.476527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.476611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.476639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 
00:27:48.330 [2024-07-15 09:35:59.476773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.476819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.476910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.476937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.477048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.477075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.477240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.477292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.477400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.477426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.477570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.477599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.477713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.477742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.477880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.477921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.478044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.478071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.478232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.478259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 
00:27:48.330 [2024-07-15 09:35:59.478370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.478397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.478483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.478509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.478586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.478613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.478702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.478730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.478810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.478838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.478953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.478982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.479064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.479096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.479189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.479216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.479358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.479385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 00:27:48.330 [2024-07-15 09:35:59.479477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.330 [2024-07-15 09:35:59.479504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.330 qpair failed and we were unable to recover it. 
00:27:48.330 [2024-07-15 09:35:59.479588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.331 [2024-07-15 09:35:59.479615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.331 qpair failed and we were unable to recover it. 00:27:48.331 [2024-07-15 09:35:59.479704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.331 [2024-07-15 09:35:59.479735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.331 qpair failed and we were unable to recover it. 00:27:48.331 [2024-07-15 09:35:59.479834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.331 [2024-07-15 09:35:59.479862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.331 qpair failed and we were unable to recover it. 00:27:48.331 [2024-07-15 09:35:59.479975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.331 [2024-07-15 09:35:59.480007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.331 qpair failed and we were unable to recover it. 00:27:48.331 [2024-07-15 09:35:59.480114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.331 [2024-07-15 09:35:59.480140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.331 qpair failed and we were unable to recover it. 00:27:48.331 [2024-07-15 09:35:59.480217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.331 [2024-07-15 09:35:59.480243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.331 qpair failed and we were unable to recover it. 00:27:48.331 [2024-07-15 09:35:59.480350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.331 [2024-07-15 09:35:59.480376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.331 qpair failed and we were unable to recover it. 00:27:48.331 [2024-07-15 09:35:59.480468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.331 [2024-07-15 09:35:59.480494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.331 qpair failed and we were unable to recover it. 00:27:48.331 [2024-07-15 09:35:59.480633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.331 [2024-07-15 09:35:59.480661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.331 qpair failed and we were unable to recover it. 00:27:48.331 [2024-07-15 09:35:59.480782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.331 [2024-07-15 09:35:59.480830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.331 qpair failed and we were unable to recover it. 
00:27:48.331 [2024-07-15 09:35:59.480935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.480964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.481076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.481103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.481216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.481243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.481441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.481499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.481579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.481606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.481760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.481787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.481898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.481926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.482061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.482088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.482277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.482341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.482506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.482558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.482695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.482722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.482845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.482872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.482957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.482984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.483102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.483131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.331 [2024-07-15 09:35:59.483319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.331 [2024-07-15 09:35:59.483381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.331 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.483544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.483599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.483742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.483769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.483888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.483916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.484027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.484054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.484145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.484172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.484283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.484310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.484403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.484436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.484561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.484587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.484690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.484722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.484798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.484837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.484948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.484974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.485116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.485143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.485270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.485298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.485438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.485465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.485546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.485577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.485718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.485755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.485872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.485901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.486003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.486054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.486200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.486229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.486340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.486367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.486462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.486490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.486596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.486623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.486738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.486765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.614 [2024-07-15 09:35:59.486882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.614 [2024-07-15 09:35:59.486910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.614 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.486990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.487018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.487127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.487167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.487281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.487310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.487433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.487461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.487551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.487578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.487654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.487681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.487798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.487847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.487990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.488018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.488113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.488140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.488356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.488384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.488537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.488592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.488678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.488705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.488781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.488816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.488971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.488999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.489115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.489161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.489283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.489311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.489430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.489457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.489572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.489601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.489720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.489749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.489866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.489894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.489983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.490010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.490132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.490159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.490283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.490311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.490425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.490453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.490546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.490573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.490712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.490740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.490860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.490888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.490980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.491007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.491158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.491186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.491300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.491327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.491441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.491468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.491584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.491611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.491695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.491722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.491809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.491837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.491924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.491951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.615 qpair failed and we were unable to recover it.
00:27:48.615 [2024-07-15 09:35:59.492037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.615 [2024-07-15 09:35:59.492077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.492192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.492219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.492304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.492331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.492419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.492449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.492541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.492569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.492711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.492738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.492845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.492874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.492990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.493017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.493134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.493161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.493274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.493301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.493418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.493445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.493582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.493609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.493719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.493748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.493883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.493911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.494003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.494031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.494142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.494169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.494281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.494308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.494394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.494423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.494505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.494532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.494610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.494641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.494750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.494777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.494878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.494907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.495022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.495049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.495163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.495192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.495312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.495340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.495453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.495480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.495570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.495598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.495685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.495713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.495873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.495913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.496010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.496038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.496131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.496159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.496312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.496365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.496574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.496627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.496740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.496767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.496950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.496978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.497078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.497104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.497209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.497236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.497322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.616 [2024-07-15 09:35:59.497348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.616 qpair failed and we were unable to recover it.
00:27:48.616 [2024-07-15 09:35:59.497455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.497483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.497628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.497658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.497742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.497771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.497902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.497929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.498043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.498072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.498158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.498187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.498326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.498353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.498442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.498468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.498582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.498614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.498700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.498729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.498812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.498839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.498979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.499006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.499130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.499158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.499295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.499323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.499427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.499455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.499577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.499604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.499744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.499771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.499912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.499953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.500048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.500084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.500176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.500205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.500341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.500386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.500500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.500528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.500614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.500641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.500719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.500746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.500862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.500890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.501004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.501031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.501151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.501178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.501295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.501322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.501421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.501448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.501596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.501624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.501715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.501743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.501877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.501904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.502017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.502045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.502132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.502159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.502296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.502323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.617 [2024-07-15 09:35:59.502533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.617 [2024-07-15 09:35:59.502593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.617 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.502706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.502733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.502852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.502880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.502989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.503015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.503101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.503128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.503266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.503293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.503380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.503409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.503494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.503522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.503634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.503661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.503779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.503813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.503911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.503938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.504055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.504082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.504195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.504225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.504340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.504371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.504510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.504537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.504653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.504680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.504774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.504811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.504936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.504963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.505055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.505082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.505202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.505229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.505322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.505349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.505488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.505515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.505624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.505651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.505765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.505792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.505886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.505913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.506021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.506047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.506191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.506218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.506366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.506395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.506543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.506571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.506719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.506746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.506863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.506891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.506981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.507008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.507102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.507129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.507249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.507308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.507454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.507482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.507590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.507617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.507709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.618 [2024-07-15 09:35:59.507736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.618 qpair failed and we were unable to recover it.
00:27:48.618 [2024-07-15 09:35:59.507846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.507874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.507967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.507994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.508120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.508147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.508242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.508269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.508378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.508405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.508544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.508572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.508683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.508711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.508807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.508835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.508915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.508943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.509079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.509106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.509251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.509278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.509414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.509441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.509552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.509579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.509696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.509723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.509820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.509857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.509997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.510025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.510209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.510240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.510386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.510413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.510495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.510522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.510665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.510692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.510781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.510813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.510922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.510949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.511058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.511085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.511196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.511223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.511307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.511335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.511470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.511497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.511619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.511647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.619 [2024-07-15 09:35:59.511788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.619 [2024-07-15 09:35:59.511821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.619 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.511943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.511970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.512088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.512115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.512335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.512404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.512540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.512568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.512658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.512698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.512798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.512833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.512947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.512974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.513154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.513213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.513382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.513429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.513545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.513572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.513710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.513737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.513863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.513890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.514041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.514101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.514312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.514359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.514495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.514522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.514637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.514664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.514806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.514833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.514918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.514946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.515085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.515112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.515197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.515224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.515339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.515366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.515451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.515482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.515589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.515616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.515698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.515726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.515799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.515831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.515938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.515965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.516087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.516114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.516229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.516255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.516370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.516402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.516513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.516540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.516656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.516683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.516823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.516855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.517003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.517029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.517182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.517209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.517301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.517329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.517442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.620 [2024-07-15 09:35:59.517469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.620 qpair failed and we were unable to recover it.
00:27:48.620 [2024-07-15 09:35:59.517597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.517624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.517730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.517758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.517916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.517944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.518021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.518048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.518262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.518329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.518440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.518467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.518585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.518612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.518707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.518734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.518842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.518869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.518953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.518980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.519104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.519131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.519238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.519265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.519412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.519439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.519532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.519560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.519677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.519705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.519843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.519872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.519984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.520011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.520150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.520178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.520289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.520316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.520432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.520459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.520554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.520582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.520721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.520748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.520866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.520892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.520999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.521026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.521176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.521203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.521316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.521343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.521453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.521480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.521594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.521622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.521760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.521787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.521939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.521966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.522053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.522083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.522217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.522244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.522321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.522353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.522445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.522472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.522582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.621 [2024-07-15 09:35:59.522610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.621 qpair failed and we were unable to recover it.
00:27:48.621 [2024-07-15 09:35:59.522722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.522750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.522913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.522940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.523057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.523084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.523190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.523219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.523377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.523404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.523523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.523550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.523699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.523726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.523848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.523876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.523987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.524014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.524136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.524163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.524271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.524298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.524438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.524474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.524610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.524638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.524736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.524763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.524876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.524916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.525003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.525032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.525129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.525158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.525338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.525366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.525481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.525508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.525617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.525644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.525737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.525764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.525879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.525907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.526000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.526027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.526137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.526164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.526263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.526302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.526449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.526477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.526565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.526592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.526684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.526711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.526824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.526855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.526935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.526963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.527166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.527232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.527473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.527539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.527723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.527750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.527853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.527881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.527996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.528024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.528113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.528181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.528466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.622 [2024-07-15 09:35:59.528493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.622 qpair failed and we were unable to recover it.
00:27:48.622 [2024-07-15 09:35:59.528699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.528731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.528849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.528877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.529005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.529032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.529202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.529267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.529456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.529525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.529721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.529748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.529881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.529909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.529999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.530026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.530124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.530151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.530256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.530325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.530613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.530678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.530859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.530886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.530999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.531026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.531139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.531194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.531470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.531537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.531785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.531865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.531978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.532005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.532149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.532176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.532284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.532360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.532586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 09:35:59.532652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 09:35:59.532826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.623 [2024-07-15 09:35:59.532864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.623 qpair failed and we were unable to recover it. 00:27:48.623 [2024-07-15 09:35:59.532975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.623 [2024-07-15 09:35:59.533002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.623 qpair failed and we were unable to recover it. 00:27:48.623 [2024-07-15 09:35:59.533120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.623 [2024-07-15 09:35:59.533147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.623 qpair failed and we were unable to recover it. 00:27:48.623 [2024-07-15 09:35:59.533320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.623 [2024-07-15 09:35:59.533386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.623 qpair failed and we were unable to recover it. 00:27:48.623 [2024-07-15 09:35:59.533637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.623 [2024-07-15 09:35:59.533702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.623 qpair failed and we were unable to recover it. 00:27:48.623 [2024-07-15 09:35:59.533866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.623 [2024-07-15 09:35:59.533893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.623 qpair failed and we were unable to recover it. 00:27:48.623 [2024-07-15 09:35:59.533984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.623 [2024-07-15 09:35:59.534010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.623 qpair failed and we were unable to recover it. 00:27:48.623 [2024-07-15 09:35:59.534146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.623 [2024-07-15 09:35:59.534187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.623 qpair failed and we were unable to recover it. 00:27:48.623 [2024-07-15 09:35:59.534390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.623 [2024-07-15 09:35:59.534459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.623 qpair failed and we were unable to recover it. 00:27:48.623 [2024-07-15 09:35:59.534691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.623 [2024-07-15 09:35:59.534756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.623 qpair failed and we were unable to recover it. 
00:27:48.623 [2024-07-15 09:35:59.534981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.623 [2024-07-15 09:35:59.535007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.623 qpair failed and we were unable to recover it. 00:27:48.623 [2024-07-15 09:35:59.535121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.623 [2024-07-15 09:35:59.535147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.623 qpair failed and we were unable to recover it. 00:27:48.623 [2024-07-15 09:35:59.535280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.623 [2024-07-15 09:35:59.535348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.623 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.535600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.535665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.535813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.535840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.535958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.535984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.536143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.536207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.536514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.536541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.536713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.536740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.536878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.536905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 
00:27:48.624 [2024-07-15 09:35:59.536989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.537015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.537104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.537131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.537242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.537269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.537428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.537493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.537644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.537731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.537835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.537866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.537959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.537986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.538096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.538125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.538238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.538265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.538513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.538579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 
00:27:48.624 [2024-07-15 09:35:59.538759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.538786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.538941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.538969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.539064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.539091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.539319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.539383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.539607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.539674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.539865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.539892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.540003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.540030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.540155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.540210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.540489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.540553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.540752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.540779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 
00:27:48.624 [2024-07-15 09:35:59.540884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.540913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.541004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.541031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.541145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.541171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.541293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.541319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.541584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.541648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.541867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.541895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.624 qpair failed and we were unable to recover it. 00:27:48.624 [2024-07-15 09:35:59.542010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.624 [2024-07-15 09:35:59.542036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.542169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.542210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.542430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.542485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.542710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.542765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 
00:27:48.625 [2024-07-15 09:35:59.542885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.542912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.543048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.543081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.543194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.543221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.543351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.543410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.543520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.543548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.543657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.543684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.543811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.543841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.543979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.544020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.544143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.544171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.544288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.544315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 
00:27:48.625 [2024-07-15 09:35:59.544404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.544432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.544577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.544617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.544715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.544743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.544854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.544881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.544995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.545022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.545121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.545179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.545408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.545473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.545662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.545690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.545818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.545860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.545980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.546008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 
00:27:48.625 [2024-07-15 09:35:59.546202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.546261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.546413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.546467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.546583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.546611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.546723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.546750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.546871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.546903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.546996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.547024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.547144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.547171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.547263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.547291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.547397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.547424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.547538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.547566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 
00:27:48.625 [2024-07-15 09:35:59.547677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.547706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.547819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.547857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 09:35:59.547969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 09:35:59.547999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.548123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.548151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.548291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.548318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.548427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.548454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.548565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.548593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.548704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.548732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.548825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.548860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.548936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.548963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 
00:27:48.626 [2024-07-15 09:35:59.549043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.549076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.549213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.549241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.549354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.549382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.549501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.549528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.549669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.549696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.549788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.549821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.549962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.549989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.550099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.550125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.550294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.550361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.550620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.550646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 
00:27:48.626 [2024-07-15 09:35:59.550752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.550831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.551005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.551081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.551336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.551400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.551648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.551713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.551898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.551926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.552015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.552078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.552410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.552476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.552657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.552724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.552929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.552956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.553046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.553083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 
00:27:48.626 [2024-07-15 09:35:59.553229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.553285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.553584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.553611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.553788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.553821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.553927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.553954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.554039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.554109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.554380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.554445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.554639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.554705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.554960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.554987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.555093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.555160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 09:35:59.555446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 09:35:59.555513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 
00:27:48.627 [2024-07-15 09:35:59.555768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.555858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.555970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.555997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.556098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.556125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.556262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.556308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.556590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.556655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.556877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.556904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.557015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.557042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.557249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.557313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.557561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.557626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.557894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.557922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 
00:27:48.627 [2024-07-15 09:35:59.558011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.558038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.558140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.558167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.558303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.558330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.558529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.558593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.558827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.558886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.558966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.558993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.559120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.559148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.559295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.559344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.559540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.559606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.559787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.559819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 
00:27:48.627 [2024-07-15 09:35:59.559939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.559965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.560061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.560087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.560270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.560335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.560561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.560626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.560902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.560929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.561086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.561126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.561268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.561327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.561537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.561591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.561678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.561705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 09:35:59.561822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 09:35:59.561849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 
00:27:48.627 [2024-07-15 09:35:59.561965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.627 [2024-07-15 09:35:59.561992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.627 qpair failed and we were unable to recover it.
[... the two *ERROR* records above, each followed by "qpair failed and we were unable to recover it.", repeat verbatim from [2024-07-15 09:35:59.561965] through [2024-07-15 09:35:59.609198], alternating between tqpair=0x7f6a58000b90 and tqpair=0x7f6a60000b90; every connect() attempt to 10.0.0.2, port=4420 fails with errno = 111 and no qpair recovers ...]
00:27:48.633 [2024-07-15 09:35:59.609388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.633 [2024-07-15 09:35:59.609436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.633 qpair failed and we were unable to recover it. 00:27:48.633 [2024-07-15 09:35:59.609617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.633 [2024-07-15 09:35:59.609665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.633 qpair failed and we were unable to recover it. 00:27:48.633 [2024-07-15 09:35:59.609851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.633 [2024-07-15 09:35:59.609887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.633 qpair failed and we were unable to recover it. 00:27:48.633 [2024-07-15 09:35:59.610032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.633 [2024-07-15 09:35:59.610067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.633 qpair failed and we were unable to recover it. 00:27:48.633 [2024-07-15 09:35:59.610232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.610281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.610476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.610523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.610698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.610746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.610934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.610969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.611109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.611144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.611254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.611290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 
00:27:48.634 [2024-07-15 09:35:59.611418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.611467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.611622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.611670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.611858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.611895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.612053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.612094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.612241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.612302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.612452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.612500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.612635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.612683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.612902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.612938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.613139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.613186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.613387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.613437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 
00:27:48.634 [2024-07-15 09:35:59.613591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.613640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.613860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.613896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.614009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.614044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.614164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.614199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.614315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.614350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.614468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.614506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.614683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.614730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.614935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.614971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.615109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.615160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.615340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.615388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 
00:27:48.634 [2024-07-15 09:35:59.615613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.615662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.615880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.615917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.616029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.616064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.616209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.616258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.616424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.616473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.616677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.616726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.616926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.616976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.617161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.617209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.617394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.617442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.617597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.617646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 
00:27:48.634 [2024-07-15 09:35:59.617796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.617856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.618038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 09:35:59.618088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 09:35:59.618272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.618320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.618471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.618519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.618717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.618765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.618930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.618980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.619199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.619247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.619420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.619468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.619627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.619676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.619829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.619878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 
00:27:48.635 [2024-07-15 09:35:59.620068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.620117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.620340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.620388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.620601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.620650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.620814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.620863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.621084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.621132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.621270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.621318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.621483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.621531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.621743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.621795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.622014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.622061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.622201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.622249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 
00:27:48.635 [2024-07-15 09:35:59.622466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.622514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.622684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.622724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.622989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.623037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.623229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.623277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.623459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.623507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.623702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.623749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.623934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.623983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.624158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.624210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.624433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.624484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.624665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.624713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 
00:27:48.635 [2024-07-15 09:35:59.624888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.624937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.625099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.625147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.625344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.625392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.625625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.625660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.625810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.625846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.626010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.626079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.626265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.626312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.626528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.626576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.626745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.626794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.627033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.627068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 
00:27:48.635 [2024-07-15 09:35:59.627208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.627244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.627458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.627493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 09:35:59.627668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 09:35:59.627715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.627896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.627944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.628124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.628171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.628318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.628366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.628554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.628603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.628746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.628794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.629013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.629062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.629261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.629309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 
00:27:48.636 [2024-07-15 09:35:59.629523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.629571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.629720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.629768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.629950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.630000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.630160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.630208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.630399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.630447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.630619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.630668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.630837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.630892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.631046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.631096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.631237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.631285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.631452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.631500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 
00:27:48.636 [2024-07-15 09:35:59.631661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.631709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.631904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.631961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.632167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.632214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.632396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.632445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.632685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.632735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.632948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.632997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.633194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.633242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.633389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.633437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.633620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.633668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.633851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.633901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 
00:27:48.636 [2024-07-15 09:35:59.634050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.634099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.634261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.634309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.634454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.634502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.634716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.634763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.634955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.635003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.635219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.635270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.635439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.635490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.635681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.635732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.635992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.636045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.636225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.636277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 
00:27:48.636 [2024-07-15 09:35:59.636448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.636496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.636689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.636736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.636922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 09:35:59.636971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 09:35:59.637167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.637215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.637386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.637437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.637635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.637687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.637877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.637934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.638139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.638191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.638359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.638410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.638617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.638668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 
00:27:48.637 [2024-07-15 09:35:59.638854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.638907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.639130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.639180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.639430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.639481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.639681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.639732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.639937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.639987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.640190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.640242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.640437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.640488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.640680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.640731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.640919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.640971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 09:35:59.641128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 09:35:59.641179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 
00:27:48.641 [2024-07-15 09:35:59.685442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.685494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.685732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.685783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.685967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.686019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.686214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.686265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.686499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.686550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.686742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.686794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.687039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.687091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.687262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.687313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.687582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d770e0 is same with the state(5) to be set
00:27:48.642 [2024-07-15 09:35:59.687897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.687975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.688159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.688212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.688488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.688540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.688710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.688760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.688941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.688993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.689150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.689202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.689388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.689440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.689625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.689676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.689819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.689872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.690051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.690101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.690294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.690345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.690548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.690599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.690783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.690854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.691019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.691072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.691290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.691343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.691558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.691610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.691788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.691853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.692034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.692085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.692260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.692311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.692509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.692560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.692724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.692776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.693002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.693053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.693276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.693327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.693524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.693574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.693790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.693865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.694044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.694096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.642 [2024-07-15 09:35:59.694325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.642 [2024-07-15 09:35:59.694377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.642 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.694579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.694630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.694815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.694866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.695061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.695113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.695279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.695330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.695502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.695553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.695757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.695820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.695992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.696045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.696212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.696262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.696454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.696505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.696735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.696785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.696999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.697051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.697237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.697288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.697453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.697503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.697702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.697761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.697985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.698036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.698188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.698239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.698429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.698480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.698674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.698725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.698956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.699011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.699240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.699303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.699582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.699646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.699904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.699961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.700209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.700273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.700583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.700647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.700927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.700983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.701199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.701254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.701461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.701516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.701692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.701748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.701988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.702044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.702228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.702282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.702511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.702571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.702839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.702900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.703101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.703160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.706819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.706873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.707007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.707038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.707133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.707161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.707282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.707311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.707412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.707440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.707565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.707591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.707714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.707740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.707840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.643 [2024-07-15 09:35:59.707870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.643 qpair failed and we were unable to recover it.
00:27:48.643 [2024-07-15 09:35:59.707965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.707992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.708094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.708121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.708248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.708273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.708381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.708407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.708548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.708574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.708693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.708720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.708814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.708840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.708965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.708991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.709113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.709139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.709240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.709268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.709384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.709412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.709536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.709563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.709907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.709936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.710064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.710115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.710209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.710237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.710358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.710384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.710502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.710529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.710662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.710688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.710774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.710808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.710883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.710910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.711021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.711047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.711132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.711158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.711259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.711285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.711372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.711397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.711512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.711537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.711674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.711699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.711814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.711845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.711933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.711959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.712038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.712063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.712172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.712197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.712283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.712308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.712393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.712418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.712527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.712553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.712647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.712672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.712758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.712782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.712922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.712948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.713038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.713062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.713207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.713232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.713336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.713361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.713471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.713496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.713612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 09:35:59.713638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
00:27:48.644 [2024-07-15 09:35:59.713717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.713742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.713824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.713850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.713962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.713988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.714070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.714094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.714174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.714199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.714307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.714331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.714440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.714465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.714549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.714574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.714709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.714748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.714853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.714881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.714970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.714996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.715120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.715145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.715233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.715259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.715372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.715397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.715512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.715537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.715650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.715675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.715748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.715774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.715875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.715901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.715986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.716010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.716151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.716177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.716373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.716428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.716519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.716544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.716654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.716679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.716810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.716835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.716920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.716944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.717030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.717054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.717165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.717193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.717285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.717311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.717418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.717444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.717577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.717604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.717721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.717747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.717849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.717875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.717993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.718018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.718104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.718130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.718211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.718237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.718346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.718373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.718455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.718481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.718594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.718619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.718704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.718730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.718834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.718879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.718978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.645 [2024-07-15 09:35:59.719006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.645 qpair failed and we were unable to recover it.
00:27:48.645 [2024-07-15 09:35:59.719136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.719192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.719343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.719407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.719569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.719625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.719772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.719798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.719897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.719923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.720036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.720062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.720205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.720263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.720427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.720481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.720616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.720642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.720782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.720843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.720922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.720948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.721031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.721057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.721142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.721168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.721242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.721267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.721377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.721407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.721483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.721509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.721602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.721627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.721711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.721737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.721927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.721953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.722063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.722089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.722171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.722197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.722306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.722331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.722418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.722443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.722532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.722557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.722665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.722691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.722827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.722866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.722954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.722980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.723064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.723089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.723181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.723206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.723310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.723336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.723460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.723499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.723584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.723611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.723701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.723727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.723811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.723837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.723921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.723948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.646 [2024-07-15 09:35:59.724029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.646 [2024-07-15 09:35:59.724055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.646 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.724139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.724167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.724259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.724287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.724373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.724398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.724488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.724514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.724593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.724619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.724733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.724759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.724884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.724910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.725005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.725030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.725142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.725168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.725251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.725279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.725362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.725388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.725475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.725501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.725584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.725610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.725718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.725743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.725858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.725884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.725961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.725987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.726099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.726125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.726210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.726236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.726381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.726434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.726526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.726552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.726668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.726693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.726776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.726807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.726919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.726945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.727052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.727078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.727188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.727214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.727295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.727320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.727413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.727438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.727554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.727582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.727661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.727687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.727765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.727796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.727895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.727922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.728003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.728030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.728137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.728163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.728319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.728369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.728528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.728577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.728753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.728819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.728911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.728938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.729028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.729054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.729189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.647 [2024-07-15 09:35:59.729215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.647 qpair failed and we were unable to recover it.
00:27:48.647 [2024-07-15 09:35:59.729368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.729418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.729564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.729622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.729771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.729834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.729949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.729975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.730097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.730124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.730201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.730227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.730346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.730395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.730535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.730590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.730729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.730778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.730941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.730968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.731051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.731077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.731190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.731216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.731328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.731353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.731546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.731572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.731681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.731707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.731787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.731822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.731929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.731955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.732073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.732099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.732233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.732260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.732395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.732421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.732510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.732536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.732610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.732637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.732739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.732777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.732887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.732926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.733016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.733043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.733260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.733312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.733461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.733504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.733611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.733637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.733717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.733742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.733842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.733881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.733976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.734007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.734098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.734124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.734254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.734307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.734441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.734494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.734580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.734606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.734694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.734719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.648 qpair failed and we were unable to recover it.
00:27:48.648 [2024-07-15 09:35:59.734811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.648 [2024-07-15 09:35:59.734836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.734918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.734943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.735046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.735071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.735148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.735174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.735260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.735285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.735393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.735419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.735504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.735529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.735613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.735638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.735732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.735758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.735843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.735870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.736006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.736032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.736116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.736141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.736251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.736277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.736362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.736388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.736467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.736493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.736580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.736611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.736711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.736738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.736850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.736877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.736996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.737023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.737106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.737163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.737407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.737456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.737582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.737620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.737714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.737740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.737833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.737861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.737976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.738002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.738118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.738172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.738314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.738363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.738506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.738558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.738656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.738695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.738789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.738825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.738914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.738940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.739130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.739182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.739386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.739433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.739640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.739686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.739817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.739845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.739984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.740009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.740116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.740173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.740315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.740365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.649 [2024-07-15 09:35:59.740482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.649 [2024-07-15 09:35:59.740531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.649 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.740632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.740657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.740772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.740806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.740889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.740915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.741029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.741055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.741128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.741154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.741311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.741337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.741589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.741635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.741790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.741835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.741920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.741947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.742043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.742069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.742182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.742208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.742353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.742399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.742535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.742581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.742759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.742784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.742872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.742898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.742985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.743010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.743097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.743123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.743245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.743271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.743442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.743489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.743688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.743734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.743825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.743853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.743987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.744025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.744126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.744158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.744271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.744297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.744393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.744418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.744527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.744552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.744637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.744663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.744754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.744779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.744876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.744902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.745015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.745041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.745131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.650 [2024-07-15 09:35:59.745157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.650 qpair failed and we were unable to recover it.
00:27:48.650 [2024-07-15 09:35:59.745238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.650 [2024-07-15 09:35:59.745264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.650 qpair failed and we were unable to recover it. 00:27:48.650 [2024-07-15 09:35:59.745386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.650 [2024-07-15 09:35:59.745411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.650 qpair failed and we were unable to recover it. 00:27:48.650 [2024-07-15 09:35:59.745514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.650 [2024-07-15 09:35:59.745553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.650 qpair failed and we were unable to recover it. 00:27:48.650 [2024-07-15 09:35:59.745644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.650 [2024-07-15 09:35:59.745672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.650 qpair failed and we were unable to recover it. 00:27:48.650 [2024-07-15 09:35:59.745788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.650 [2024-07-15 09:35:59.745827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.650 qpair failed and we were unable to recover it. 00:27:48.650 [2024-07-15 09:35:59.745943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.650 [2024-07-15 09:35:59.745969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.650 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.746058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.746084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.746164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.746191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.746340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.746386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.746523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.746568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 
00:27:48.651 [2024-07-15 09:35:59.746716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.746764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.746903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.746929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.747048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.747073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.747155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.747181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.747316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.747373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.747499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.747525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.747719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.747746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.747832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.747859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.747957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.747996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.748117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.748142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 
00:27:48.651 [2024-07-15 09:35:59.748295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.748344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.748435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.748460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.748547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.748573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.748681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.748707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.748792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.748827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.748914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.748941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.749053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.749080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.749211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.749255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.749424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.749469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.749593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.749637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 
00:27:48.651 [2024-07-15 09:35:59.749764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.749790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.749907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.749937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.750046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.750071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.750147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.750172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.750252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.750277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.750360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.750385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.750474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.750499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.750579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.750605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.750740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.750764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 00:27:48.651 [2024-07-15 09:35:59.750850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.651 [2024-07-15 09:35:59.750877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.651 qpair failed and we were unable to recover it. 
00:27:48.651 [2024-07-15 09:35:59.750958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.750985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.751095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.751121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.751235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.751261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.751372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.751398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.751522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.751562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.751682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.751708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.751833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.751873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.751994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.752022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.752104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.752131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.752209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.752236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 
00:27:48.652 [2024-07-15 09:35:59.752326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.752354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.752439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.752468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.752556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.752583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.752697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.752722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.752811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.752838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.752917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.752942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.753021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.753046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.753158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.753183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.753304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.753336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.753536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.753562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 
00:27:48.652 [2024-07-15 09:35:59.753670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.753695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.753777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.753808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.753927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.753953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.754039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.754064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.754195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.754253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.754370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.754399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.754483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.754510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.754586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.754612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.754693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.754720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.754795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.754829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 
00:27:48.652 [2024-07-15 09:35:59.754914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.754940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.755033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.755059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.755178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.755222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.755410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.755436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.755566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.755592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.755730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.755756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.755843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.755869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.755954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.755980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.756103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.756161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.756288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.756338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 
00:27:48.652 [2024-07-15 09:35:59.756442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.756468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.652 qpair failed and we were unable to recover it. 00:27:48.652 [2024-07-15 09:35:59.756664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.652 [2024-07-15 09:35:59.756690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.756811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.756837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.756912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.756938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.757137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.757186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.757274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.757299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.757398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.757425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.757506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.757531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.757622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.757648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.757787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.757820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 
00:27:48.653 [2024-07-15 09:35:59.757936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.757962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.758051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.758077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.758157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.758182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.758275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.758304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.758417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.758462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.758633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.758679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.758839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.758883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.758976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.759002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.759108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.759139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.759255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.759281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 
00:27:48.653 [2024-07-15 09:35:59.759400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.759442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.759635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.759678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.759859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.759887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.759977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.760004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.760091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.760119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.760213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.760238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.760337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.760393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.760503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.760529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.760609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.760634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.760713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.760739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 
00:27:48.653 [2024-07-15 09:35:59.760824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.760851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.760959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.760985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.761072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.761098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.761208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.761233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.761323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.761350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.761460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.761486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.761621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.761646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.761777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.653 [2024-07-15 09:35:59.761823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.653 qpair failed and we were unable to recover it. 00:27:48.653 [2024-07-15 09:35:59.761950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.761977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.762090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.762116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 
00:27:48.654 [2024-07-15 09:35:59.762266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.762307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.762479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.762521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.762657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.762705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.762841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.762867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.762977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.763004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.763123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.763151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.763319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.763362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.763494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.763542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.763667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.763693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.763814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.763843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 
00:27:48.654 [2024-07-15 09:35:59.763938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.763965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.764074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.764100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.764227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.764254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.764385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.764427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.764622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.764675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.764873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.764899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.764984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.765009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.765092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.765118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.765206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.765239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.765446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.765527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 
00:27:48.654 [2024-07-15 09:35:59.765714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.765761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.765895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.765923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.766065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.766090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.766177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.766203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.766318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.766344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.766457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.766483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.766592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.766618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.766754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.766785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.766884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.766910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.767015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.767042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 
00:27:48.654 [2024-07-15 09:35:59.767180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.767208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.767324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.767371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.767573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.767617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.767741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.767767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.767871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.767897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.768008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.768034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.768181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.768222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.768374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.654 [2024-07-15 09:35:59.768430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.654 qpair failed and we were unable to recover it. 00:27:48.654 [2024-07-15 09:35:59.768569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.655 [2024-07-15 09:35:59.768595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.655 qpair failed and we were unable to recover it. 00:27:48.655 [2024-07-15 09:35:59.768755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.655 [2024-07-15 09:35:59.768781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.655 qpair failed and we were unable to recover it. 
00:27:48.655 [... 2024-07-15 09:35:59.768875 through 09:35:59.799534: the same three-line failure repeats without variation; posix.c:1038:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair 0x7f6a50000b90, 0x7f6a58000b90, 0x7f6a60000b90, or 0x1d69200 with addr=10.0.0.2, port=4420; and each qpair fails and cannot be recovered ...]
00:27:48.950 [2024-07-15 09:35:59.799619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.950 [2024-07-15 09:35:59.799645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.950 qpair failed and we were unable to recover it.
00:27:48.950 [2024-07-15 09:35:59.799762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 09:35:59.799788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 09:35:59.799881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 09:35:59.799908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 09:35:59.800104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 09:35:59.800143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 09:35:59.800301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 09:35:59.800341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 09:35:59.800525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 09:35:59.800566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 09:35:59.800716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 09:35:59.800746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 09:35:59.800842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 09:35:59.800868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 09:35:59.800976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 09:35:59.801028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 09:35:59.801141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 09:35:59.801190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 09:35:59.801326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 09:35:59.801376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 
00:27:48.950 [2024-07-15 09:35:59.801491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 09:35:59.801517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 09:35:59.801637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 09:35:59.801664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 09:35:59.801752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 09:35:59.801778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 09:35:59.801899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.801928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.802041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.802067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.802172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.802201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.802322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.802348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.802466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.802494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.802600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.802626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.802717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.802744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 
00:27:48.951 [2024-07-15 09:35:59.802894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.802944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.803105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.803155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.803264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.803303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.803399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.803424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.803503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.803528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.803656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.803685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.803775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.803816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.803940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.803967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.804111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.804150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.804259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.804302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 
00:27:48.951 [2024-07-15 09:35:59.804432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.804472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.804606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.804633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.804724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.804750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.804843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.804869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.804977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.805028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.805202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.805250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.805383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.805436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.805522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.805549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.805627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.805652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.805736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.805761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 
00:27:48.951 [2024-07-15 09:35:59.805884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.805913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.805991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.806017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.806138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.806165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.806276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.806316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.806493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.806552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.806686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.806736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.806895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.806924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.807014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.807040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.807127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.807153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.807294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.807333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 
00:27:48.951 [2024-07-15 09:35:59.807487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.807527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.807641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.807680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.807815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.807842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.807942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.807982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.808167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.808207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.808394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.808433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.808549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.808589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.808694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.808733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.808853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.808881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.809008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.809057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 
00:27:48.951 [2024-07-15 09:35:59.809162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.809213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.809349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.809390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.809465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.809491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.809611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.809636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.809747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.809775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.809866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.809893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.810006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.810032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.810110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 09:35:59.810136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 09:35:59.810217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.810243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.810376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.810402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 
00:27:48.952 [2024-07-15 09:35:59.810510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.810548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.810709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.810753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.810903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.810956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.811084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.811124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.811297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.811340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.811536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.811576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.811742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.811770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.811889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.811914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.811993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.812019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.812127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.812176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 
00:27:48.952 [2024-07-15 09:35:59.812309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.812358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.812468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.812494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.812576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.812602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.812687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.812713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.812846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.812872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.812952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.812982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.813067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.813094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.813206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.813232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.813341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.813366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.813474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.813500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 
00:27:48.952 [2024-07-15 09:35:59.813612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.813638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.813758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.813787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.813915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.813959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.814115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.814154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.814301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.814329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.814471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.814497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.814612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.814638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.814744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.814771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.814886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.814915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.815031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.815057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 
00:27:48.952 [2024-07-15 09:35:59.815201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.815249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.815410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.815449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.815608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.815648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.815796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.815832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.815938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.815967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.816049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.816100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.816232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.816284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.816446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.816488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.816636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.816675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.816799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.816831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 
00:27:48.952 [2024-07-15 09:35:59.816938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.816965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.817101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.817127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.817240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.817308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.817447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.817501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.817611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.817637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.817726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.817753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.817836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.817862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.817943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.817970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 09:35:59.818062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 09:35:59.818089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.818175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.818201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 
00:27:48.953 [2024-07-15 09:35:59.818336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.818362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.818475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.818501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.818633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.818658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.818743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.818768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.818893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.818922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.819008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.819042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.819130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.819175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.819316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.819357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.819544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.819587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.819720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.819746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 
00:27:48.953 [2024-07-15 09:35:59.819858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.819884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.819968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.819995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.820130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.820169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.820360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.820399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.820562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.820610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.820743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.820782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.820906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.820949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.821152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.821194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.821343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.821382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.821513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.821553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 
00:27:48.953 [2024-07-15 09:35:59.821707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.821746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.821900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.821926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.822020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.822050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.822137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.822164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.822313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.822353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.822572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.822611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.822775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.822825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.822907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.822937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.823090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.823131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.823274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.823313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 
00:27:48.953 [2024-07-15 09:35:59.823462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.823506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.823696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.823736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.823928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.823967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.824086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.824114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.824242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.824282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.824416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.824455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.824623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.824664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.824823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.824861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.824960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.824987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.825082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.825107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 
00:27:48.953 [2024-07-15 09:35:59.825188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.825215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.825352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.825399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.825509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.825558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.825668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.825694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.825791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.825824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.825961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.825992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.826105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.826132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.826241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.826267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.826379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.826404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.826513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.826539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 
00:27:48.953 [2024-07-15 09:35:59.826632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.826657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.826769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.826794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.826896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.826924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.827014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.827041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.827154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.827180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.827293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.827319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.827407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.827433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.827540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.827566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.827648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.827674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.953 qpair failed and we were unable to recover it. 00:27:48.953 [2024-07-15 09:35:59.827776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.953 [2024-07-15 09:35:59.827828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 
00:27:48.954 [2024-07-15 09:35:59.827929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.827974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.828100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.828143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.828304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.828344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.828524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.828581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.828706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.828732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.828827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.828855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.828932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.828958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.829067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.829107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.829259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.829298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.829403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.829441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 
00:27:48.954 [2024-07-15 09:35:59.829563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.829603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.829718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.829756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.829888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.829919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.830065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.830104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.830216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.830254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.830386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.830426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.830569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.830606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.830748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.830786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.830911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.830937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.831052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.831077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 
00:27:48.954 [2024-07-15 09:35:59.831240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.831278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.831427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.831465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.831590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.831628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.831791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.831823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.831904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.831930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.832018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.832043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.832185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.832211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.832335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.832373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.832551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.832589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.832759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.832839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 
00:27:48.954 [2024-07-15 09:35:59.832972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.832997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.833076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.833102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.833208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.833234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.833403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.833467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.833600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.833626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.833864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.833904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.834027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.834055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.834256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.834298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.834446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.834487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.834630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.834696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 
00:27:48.954 [2024-07-15 09:35:59.834844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.834883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.834993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.835020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.835131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.835156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.835335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.835399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.835567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.835620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.835778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.835831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.835993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.836019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.836104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.836130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.836215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.836241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.836412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.836451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 
00:27:48.954 [2024-07-15 09:35:59.836663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.836702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.836822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.836875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.836968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.836994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.837110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.837135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.837240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.837266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.837365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 09:35:59.837403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 09:35:59.837547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.837585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.837735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.837773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.837908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.837934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.838054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.838106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 
00:27:48.955 [2024-07-15 09:35:59.838251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.838290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.838451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.838489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.838610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.838648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.838819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.838847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.838940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.838967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.839080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.839120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.839270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.839317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.839478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.839518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.839669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.839709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.839875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.839902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 
00:27:48.955 [2024-07-15 09:35:59.839987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.840013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.840164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.840203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.840382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.840421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.840529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.840567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.840714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.840740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.840821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.840848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.840953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.840979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.841129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.841168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.841316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.841355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.841509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.841547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 
00:27:48.955 [2024-07-15 09:35:59.841678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.841705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.841818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.841845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.841927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.841953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.842091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.842130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.842320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.842359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.842494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.842546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.842716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.842742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.842835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.842861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.842958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.842984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.843099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.843137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 
00:27:48.955 [2024-07-15 09:35:59.843266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.843305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.843428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.843467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.843600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.843639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 09:35:59.843743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 09:35:59.843782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.843897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.843936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.844101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.844143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.844295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.844350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.844482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.844525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.844687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.844725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.844857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.844886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 
00:27:48.956 [2024-07-15 09:35:59.845004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.845031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.845142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.845182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.845338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.845379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.845532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.845571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.845710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.845736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.845858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.845886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.845971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.846008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.846099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.846180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.846354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.846388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.846507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.846546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 
00:27:48.956 [2024-07-15 09:35:59.846667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.846704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.846845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.846884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.846976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.847003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.847111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.847160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.847273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.847307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.847411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.847437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.847519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.847545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.847628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.847654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.847762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.847788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.847880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.847906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 
00:27:48.956 [2024-07-15 09:35:59.847995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.848021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.848100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.848126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.848264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.848290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.848386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.848425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.848509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.848536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.848632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.848671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.848761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.848788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.848907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.848941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.849049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.849084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.849232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.849271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 
00:27:48.956 [2024-07-15 09:35:59.849401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.849444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.849566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.849605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.849736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.849764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.849907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.849935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.850055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.850110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.850236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.850275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.850457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.850495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.850618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 09:35:59.850644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 09:35:59.850717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.957 [2024-07-15 09:35:59.850743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.957 qpair failed and we were unable to recover it. 00:27:48.957 [2024-07-15 09:35:59.850856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.957 [2024-07-15 09:35:59.850891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.957 qpair failed and we were unable to recover it. 
00:27:48.957 [2024-07-15 09:35:59.851000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.851035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.851151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.851207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.851409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.851454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.851634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.851661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.851745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.851771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.851893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.851921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.852015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.852049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.852204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.852259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.852339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.852365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.852466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.852514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.852623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.852649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.852728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.852755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.852891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.852929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.853024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.853053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.853144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.853171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.853296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.853336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.853556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.853595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.853714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.853740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.853869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.853900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.854023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.854049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.854281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.854332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.854463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.854522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.854721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.854816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.854971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.855009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.855107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.855135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.855291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.855343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.855518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.855567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.855656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.855683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.855798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.855829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.855919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.855945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.856055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.856108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.856291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.856330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.856470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.856513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.856760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.856817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.856924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.856951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.857044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.857088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.857221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.857266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.857425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.857466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.857608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.857647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.857764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.857790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.857908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.857934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.858020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.858046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.858150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 09:35:59.858184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 09:35:59.858372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.858412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.858573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.858614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.858790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.858822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.858941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.858967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.859057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.859083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.859191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.859230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.859343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.859382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.859534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.859573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.859697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.859723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.859811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.859838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.859951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.859977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.860073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.860099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.860207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.860254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.860374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.860414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.860538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.860578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.860731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.860783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.860935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.860961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.861075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.861113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.861257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.861306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.861441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.861487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.861578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.861605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.861685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.861711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.861827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.861854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.861993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.862019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.862127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.862152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.862261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.862287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.862401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.862427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.862504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.862531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.862631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.862657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.862743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.862771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.862908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.862952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.863081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.863120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.863267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.863294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.863374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.863401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.863475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.863501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.863616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.863643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.863732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.863771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.863923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.863982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.864186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.864237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.864322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.864349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.864474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.864500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.864603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.864628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.864754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.864794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.864945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.864987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.865150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 09:35:59.865190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 09:35:59.865348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.865396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.865550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.865589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.865727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.865753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.865865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.865891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.866000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.866026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.866138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.866165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.866333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.866368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.866520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.866587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.866783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.866857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.866976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.867003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.867090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.867116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.870948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.870988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.871096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.871125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.871240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.871266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.871420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.871463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.871644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.871685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.871851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.871878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.871994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.872020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.872160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.872186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.872293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.872337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.872540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.872581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.872739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.872764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.872920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.872959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.873075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.873113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.873296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.873322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.873430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.873488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.873654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.873681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.873843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.873870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.873964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.873990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.874103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.874148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.874266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.874291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.874478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.874522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.874687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.874730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.874893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.874919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.875006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.875032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 09:35:59.875196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 09:35:59.875230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.875372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.875417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.875588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.875632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.875762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.875817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.875968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.875995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.876100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.876154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.876283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.876328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.876487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.876531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.876697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.876741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.876912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.876951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.877054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.877093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.877185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.877211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.877316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.877370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.877524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.877579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.877698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.877738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.877839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.877867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.878004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.878030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.878145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.878179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.878351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.878404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.878549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.878583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.878701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.878727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.878817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.878844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.878959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.878985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.879096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.879122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.879238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.879264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.879448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.879474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.879714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.879757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.879925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.879951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.880054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.880098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.880250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.880294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.880460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.880512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.880622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.880655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.880771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.880797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.880917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.880943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.881049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.881075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.881209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.881243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.881403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.881447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.881616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.881659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.881872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.881899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.882014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.882040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.882207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.882251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.882468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.882534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.882823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.882872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.882955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.882980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.883073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.883098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.883212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.883238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.883399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.883442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.883673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.960 [2024-07-15 09:35:59.883718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.960 qpair failed and we were unable to recover it. 00:27:48.960 [2024-07-15 09:35:59.883908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.960 [2024-07-15 09:35:59.883935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.960 qpair failed and we were unable to recover it. 00:27:48.960 [2024-07-15 09:35:59.884026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.960 [2024-07-15 09:35:59.884051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.960 qpair failed and we were unable to recover it. 00:27:48.960 [2024-07-15 09:35:59.884236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.960 [2024-07-15 09:35:59.884280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.960 qpair failed and we were unable to recover it. 00:27:48.960 [2024-07-15 09:35:59.884438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.960 [2024-07-15 09:35:59.884481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.960 qpair failed and we were unable to recover it. 00:27:48.960 [2024-07-15 09:35:59.884668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.960 [2024-07-15 09:35:59.884702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.960 qpair failed and we were unable to recover it. 00:27:48.960 [2024-07-15 09:35:59.884840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.960 [2024-07-15 09:35:59.884866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.960 qpair failed and we were unable to recover it. 00:27:48.960 [2024-07-15 09:35:59.884982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.960 [2024-07-15 09:35:59.885008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.960 qpair failed and we were unable to recover it. 00:27:48.960 [2024-07-15 09:35:59.885119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.960 [2024-07-15 09:35:59.885144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.960 qpair failed and we were unable to recover it. 00:27:48.960 [2024-07-15 09:35:59.885294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.960 [2024-07-15 09:35:59.885336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.960 qpair failed and we were unable to recover it. 
00:27:48.960 [2024-07-15 09:35:59.885524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 09:35:59.885590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 09:35:59.885825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.885888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.885999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.886025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.886134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.886160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.886320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.886346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.886516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.886559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.886739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.886785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.886935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.886962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.887088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.887141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.887257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.887291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.887427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.887478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.887679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.887706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.887818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.887844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.887961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.887992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.888136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.888178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.888318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.888362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.888519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.888562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.888692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.888718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.888830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.888856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.888994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.889020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.889173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.889216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.889391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.889433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.889587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.889630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.889777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.889810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.889948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.889990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.890145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.890188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.890352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.890426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.890527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.890555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.890702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.890752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 09:35:59.890901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.961 [2024-07-15 09:35:59.890927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:48.961 qpair failed and we were unable to recover it.
00:27:48.963 [2024-07-15 09:35:59.915376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 09:35:59.915402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 09:35:59.915511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 09:35:59.915563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 09:35:59.915657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 09:35:59.915686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 09:35:59.915769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 09:35:59.915794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 09:35:59.915940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.915966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.916124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.916168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.916301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.916345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.916530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.916575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.916733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.916760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.916856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.916882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 
00:27:48.964 [2024-07-15 09:35:59.916984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.917040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.917179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.917229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.917388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.917415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.917546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.917595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.917679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.917705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.917783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.917815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.917897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.917922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.918009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.918034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.918125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.918152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.918244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.918270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 
00:27:48.964 [2024-07-15 09:35:59.918346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.918372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.918445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.918470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.918574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.918599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.918683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.918708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.918796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.918828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.918933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.918958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.919073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.919099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.919212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.919238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.919350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.919379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.919489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.919515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 
00:27:48.964 [2024-07-15 09:35:59.919604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.919643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.919735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.919767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.919932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.919977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.920120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.920163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.920353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.920405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.920528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.920579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.920659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.920684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.920807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.920833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.920976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.921026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.921109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.921136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 
00:27:48.964 [2024-07-15 09:35:59.921307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.921357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.921467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.921492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.921571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.921596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.921709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.921734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.921896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.921948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.922105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.922156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.922334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.922388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.922497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.922523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.922632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.922658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.922738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.922763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 
00:27:48.964 [2024-07-15 09:35:59.922923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.922974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.923100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.923152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.923313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.923339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.923442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.923468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.923560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.923586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.923660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.923686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.923821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.923860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.923974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.924002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.924128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.924154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.924284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.924328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 
00:27:48.964 [2024-07-15 09:35:59.924473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.924498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.924612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.924638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.924725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.924751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.924859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.924902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.925085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 09:35:59.925129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 09:35:59.925325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.925368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.925493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.925535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.925694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.925737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.925890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.925917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.926001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.926057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 
00:27:48.965 [2024-07-15 09:35:59.926223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.926265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.926434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.926477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.926629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.926694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.926907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.926933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.927081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.927124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.927289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.927331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.927501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.927543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.927708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.927751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.927886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.927912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.928027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.928053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 
00:27:48.965 [2024-07-15 09:35:59.928193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.928242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.928437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.928481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.928610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.928672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.928844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.928871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.928984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.929010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.929133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.929206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.929343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.929391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.929591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.929635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.929782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.929814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.929903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.929930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 
00:27:48.965 [2024-07-15 09:35:59.930019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.930078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.930243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.930289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.930498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.930563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.930679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.930706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.930810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.930837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.930920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.930947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.931052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.931101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.931252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.931308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.931449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.931492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.931659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.931702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 
00:27:48.965 [2024-07-15 09:35:59.931853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.931879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.932012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.932038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.932200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.932242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.932442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.932484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.932690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.932737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.932942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.932969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.933086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.933112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.933219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.933262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.933434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.933477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.933683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.933747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 
00:27:48.965 [2024-07-15 09:35:59.933937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.933964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.934092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.934134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.934359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.934402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.934604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.934646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.934788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.934819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.934934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.934959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.935068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.935117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.935286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.935329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.935469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.965 [2024-07-15 09:35:59.935513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.965 qpair failed and we were unable to recover it. 00:27:48.965 [2024-07-15 09:35:59.935686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.935733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 
00:27:48.966 [2024-07-15 09:35:59.935874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.935901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.935987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.936038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.936212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.936255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.936391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.936436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.936633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.936676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.936856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.936888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.936980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.937006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.937125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.937167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.937296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.937340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.937535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.937579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 
00:27:48.966 [2024-07-15 09:35:59.937764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.937790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.937901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.937927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.938034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.938079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.938220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.938245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.938423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.938467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.938584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.938627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.938778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.938809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.938895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.938923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.939031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.939057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 00:27:48.966 [2024-07-15 09:35:59.939252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.966 [2024-07-15 09:35:59.939296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.966 qpair failed and we were unable to recover it. 
00:27:48.966 [2024-07-15 09:35:59.939474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.966 [2024-07-15 09:35:59.939520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.966 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats back-to-back for every reconnect attempt from 09:35:59.939 through 09:35:59.979 ...]
00:27:48.969 [2024-07-15 09:35:59.979926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.969 [2024-07-15 09:35:59.979964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.969 qpair failed and we were unable to recover it.
00:27:48.970 [2024-07-15 09:35:59.980137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.980186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.980376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.980424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.980650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.980697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.980831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.980868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.980991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.981027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.981189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.981237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.981461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.981509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.981712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.981749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.981933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.981976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.982081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.982107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 
00:27:48.970 [2024-07-15 09:35:59.982268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.982316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.982560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.982619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.982857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.982895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.983052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.983100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.983280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.983328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.983497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.983551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.983761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.983797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.983915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.983951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.984088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.984136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.984314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.984370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 
00:27:48.970 [2024-07-15 09:35:59.984555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.984602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.984791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.984848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.984987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.985047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.985239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.985287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.985482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.985529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.985719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.985755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.985908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.985946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.986151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.986199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.986345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.986394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.986574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.986622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 
00:27:48.970 [2024-07-15 09:35:59.986817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.986874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.987095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.987143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.987308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.987356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.987562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.987612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.987813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.987851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.987996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.988033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.988202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.988250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.988464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.988513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.988703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.988739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.988899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.988937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 
00:27:48.970 [2024-07-15 09:35:59.989061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.989097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.989280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.989316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.989460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.989496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.989608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.989645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.989785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.989832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.989947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.989983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.990121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.990163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.990325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.990362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.990480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.990517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.990666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.990702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 
00:27:48.970 [2024-07-15 09:35:59.990873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.990911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.991087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.991123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.991261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.991297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.991431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.991468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.991590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.991626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.991766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.991809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.970 [2024-07-15 09:35:59.991938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.970 [2024-07-15 09:35:59.991975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.970 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.992148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.992184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.992298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.992334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.992507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.992543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 
00:27:48.971 [2024-07-15 09:35:59.992729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.992765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.992930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.992955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.993062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.993088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.993228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.993254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.993419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.993455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.993599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.993635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.993755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.993791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.993952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.993988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.994095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.994131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.994282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.994317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 
00:27:48.971 [2024-07-15 09:35:59.994466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.994502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.994673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.994720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.994915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.994964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.995154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.995201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.995424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.995498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.995737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.995786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.996002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.996077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.996318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.996389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.996625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.996680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.996884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.996956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 
00:27:48.971 [2024-07-15 09:35:59.997131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.997201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.997436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.997484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.997649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.997697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.997879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.997927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.998108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.998155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.998315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.998364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.998552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.998600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.998796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.998874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.999076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.999137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.999385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.999433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 
00:27:48.971 [2024-07-15 09:35:59.999640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.999695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:35:59.999924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:35:59.999999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.000245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.000271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.000353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.000379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.000515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.000540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.000713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.000761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.000979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.001013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.001267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.001331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.001557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.001625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.001873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.001939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 
00:27:48.971 [2024-07-15 09:36:00.002131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.002196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.002507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.002580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.002793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.002879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.003130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.003206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.003477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.003554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.003782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.003852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.004027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.004079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.004239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.004290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.004486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.004538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.971 qpair failed and we were unable to recover it. 00:27:48.971 [2024-07-15 09:36:00.004696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.971 [2024-07-15 09:36:00.004748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 
00:27:48.972 [2024-07-15 09:36:00.004963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.005016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.005174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.005225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.005391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.005442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.005639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.005694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.005914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.005972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.006188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.006240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.006495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.006547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.006750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.006799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.007018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.007067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.007283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.007335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 
00:27:48.972 [2024-07-15 09:36:00.007536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.007587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.007785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.007847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.008000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.008049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.008237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.008286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.008484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.008532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.008727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.008776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.008959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.009009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.009232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.009287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.009531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.009606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.009870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.009938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 
00:27:48.972 [2024-07-15 09:36:00.010142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.010203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.010398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.010458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.010648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.010706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.010918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.010979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.011167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.011207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.012813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.012856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.012991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.013028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.013145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.013180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.013286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.013313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.013428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.013453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 
00:27:48.972 [2024-07-15 09:36:00.013546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.013571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.013661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.013692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.013790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.013835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.013922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.013947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.014035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.014060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.014149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.014175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.014258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.014283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.014360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.014386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.014456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.014481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 00:27:48.972 [2024-07-15 09:36:00.014566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.972 [2024-07-15 09:36:00.014591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.972 qpair failed and we were unable to recover it. 
00:27:48.972 [2024-07-15 09:36:00.014673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.014699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.014781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.014816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.014904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.014930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.015027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.015052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.015162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.015188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.015308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.015334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.015433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.015519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.015664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.015693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.015782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.015818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.015906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.015932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.016020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.016045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.016132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.016158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.016243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.016268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.016344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.016369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.016453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.016480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.016570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.016595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.016675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.016700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.016795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.016828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.972 [2024-07-15 09:36:00.016910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.972 [2024-07-15 09:36:00.016942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.972 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.017033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.017059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.017174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.017199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.017280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.017306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.017383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.017408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.017495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.017521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.017613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.017638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.017732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.017759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.017846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.017872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.017960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.017986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.018096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.018122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.018233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.018259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.018367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.018393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.018471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.018498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.018586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.018613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.018710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.018748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.018842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.018870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.018989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.019015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.019096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.019122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.019212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.019237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.019316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.019341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.019453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.019480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.019562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.019588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.019673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.019699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.019776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.019807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.019899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.019925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.020005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.020031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.020171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.020199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.020346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.020371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.020453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.020479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.020589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.020615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.020729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.020755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.020841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.020867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.020950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.020976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.021057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.021082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.021165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.021191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.021274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.021300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.021415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.021441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.021546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.021571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.021675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.021701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.021787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.021820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.021937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.021963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.022043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.022069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.022180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.022207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.022319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.022345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.022461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.022487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.022601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.022627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.022704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.022730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.022821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.022850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.022961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.022993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.023134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.023161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.023270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.023296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.023374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.023400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.023513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.023539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.023650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.023677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.023768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.023793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.023913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.023938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.024016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.024042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.024123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.024148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.024286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.024311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.024396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.973 [2024-07-15 09:36:00.024422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.973 qpair failed and we were unable to recover it.
00:27:48.973 [2024-07-15 09:36:00.024501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.024527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.024682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.024722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.024844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.024872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.024959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.024986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.025072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.025098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.025214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.025239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.025351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.025377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.025528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.025555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.025646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.025677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.025768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.025795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.025920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.025947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.026034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.026060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.026173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.026205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.026292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.026318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.026429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.026456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.026535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.026560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.026671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.026696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.026812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.026838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.026944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.026969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.027081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.027106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.027219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.027254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.027377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.027404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.027498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.027525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.027611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.027638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.027725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.027753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.027854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.027883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.027974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.028000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.028110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.028136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.028223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.028249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.028336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.028363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.028447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.028472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.028570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.028596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.028676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.028701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.028779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.028816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.028906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.028932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.029033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.029058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.029137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.029163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.029239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.029264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.029343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.029368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.029479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.029505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.029591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.029616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.029703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.029729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.029823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.029850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.029941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.029970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.030067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.030095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.030182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.030208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.030288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.030314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.030458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.030484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.030574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.030601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.030682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.030708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.030829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.030862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.030950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.030977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.974 [2024-07-15 09:36:00.031095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.974 [2024-07-15 09:36:00.031121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.974 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.031208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.031235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.031345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.031371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.031484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.031512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.031599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.031626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.031714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.031739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.031827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.031852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.031966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.031992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.032072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.032101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.032190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.032216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.032302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.032330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.032409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.032436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.032567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.032593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.032714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.032740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.032842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.032869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.032961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.032987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.033069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.033095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.033184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.033210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.033284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.033310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.033398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.033424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.033528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.033553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.033637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.033663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.033753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.033780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.033882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.033909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.033995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.034022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.034104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.034131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.034224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.034250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.034328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.034354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.034441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.034468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.034542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.034568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.034683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.034709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.034814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.034841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.034922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.034948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.035053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.035078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.035163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.035188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.035266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.035295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.035404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.035430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.035509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.035535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.035622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.975 [2024-07-15 09:36:00.035648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.975 qpair failed and we were unable to recover it.
00:27:48.975 [2024-07-15 09:36:00.035736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.035762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.035862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.035901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.036019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.036046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.036127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.036153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.036264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.036289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.036381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.036407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.036519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.036545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.036659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.036684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.036765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.036792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.036887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.036913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 
00:27:48.975 [2024-07-15 09:36:00.036998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.037023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.037104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.037130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.037255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.037281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.037394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.037420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.037503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.037529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.037637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.037662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.037748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.037775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.037871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.037898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.037980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.038006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.038081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.038106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 
00:27:48.975 [2024-07-15 09:36:00.038214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.038240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.038314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.038339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.975 [2024-07-15 09:36:00.038426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.975 [2024-07-15 09:36:00.038451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.975 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.038560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.038589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.038704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.038730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.038874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.038900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.039030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.039062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.039175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.039229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.039363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.039389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.039478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.039503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 
00:27:48.976 [2024-07-15 09:36:00.039581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.039606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.039692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.039718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.039795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.039826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.039938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.039963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.040053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.040078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.040156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.040182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.040268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.040293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.040440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.040469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.040558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.040585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.040671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.040698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 
00:27:48.976 [2024-07-15 09:36:00.040784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.040817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.040902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.040929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.041011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.041037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.041118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.041144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.041234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.041259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.041343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.041368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.041475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.041502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.041604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.041630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.041751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.041790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.041889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.041917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 
00:27:48.976 [2024-07-15 09:36:00.042007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.042039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.042124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.042150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.042243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.042270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.042365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.042405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.042497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.042524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.042640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.042666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.042758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.042784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.042880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.042905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.043011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.043036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.043122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.043148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 
00:27:48.976 [2024-07-15 09:36:00.043260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.043285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.043390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.043416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.043537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.043563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.043643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.043669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.043785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.043818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.043936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.043964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.044051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.044078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.044163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.044190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.044296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.044322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.044458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.044485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 
00:27:48.976 [2024-07-15 09:36:00.044603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.044640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.044753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.044792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.044938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.044970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.045091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.045128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.045301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.045338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.045481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.045518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.045664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.045701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.045869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.045907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.046011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.046043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.046190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.046225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 
00:27:48.976 [2024-07-15 09:36:00.046392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.046427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.046539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.046575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.046716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.046751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.046916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.976 [2024-07-15 09:36:00.046948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.976 qpair failed and we were unable to recover it. 00:27:48.976 [2024-07-15 09:36:00.047050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.047098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.047240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.047275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.047421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.047456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.047613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.047662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.047808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.047858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.047962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.047995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 
00:27:48.977 [2024-07-15 09:36:00.048136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.048171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.048312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.048346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.048460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.048495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.048691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.048745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.048901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.048950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.049103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.049154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.049266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.049302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.049403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.049437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.049578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.049612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.049712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.049747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 
00:27:48.977 [2024-07-15 09:36:00.049919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.049967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.050129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.050165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.050285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.050320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.050486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.050519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.050637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.050687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.050799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.050840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.050995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.051029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.051150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.051184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.051294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.051329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.051475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.051508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 
00:27:48.977 [2024-07-15 09:36:00.051646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.051679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.051827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.051893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.052013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.052061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.052240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.052296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.052414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.052449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.052574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.052607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.052707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.052740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.052854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.052894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.053022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.053055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.053195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.053227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 
00:27:48.977 [2024-07-15 09:36:00.053359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.053402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.053540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.053573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.053706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.053739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.053852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.053887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.054029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.054061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.054175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.054211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.054338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.054371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.054502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.054535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.054642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.054674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.054811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.054844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 
00:27:48.977 [2024-07-15 09:36:00.054995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.055027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.055143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.055175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.055330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.055361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.055466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.055497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.055599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.055630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.055728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.055759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.055883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.055931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.056043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.056076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.977 [2024-07-15 09:36:00.056214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.977 [2024-07-15 09:36:00.056247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.977 qpair failed and we were unable to recover it. 00:27:48.978 [2024-07-15 09:36:00.056375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.978 [2024-07-15 09:36:00.056407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.978 qpair failed and we were unable to recover it. 
00:27:48.978 [2024-07-15 09:36:00.056513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.978 [2024-07-15 09:36:00.056544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.978 qpair failed and we were unable to recover it. 00:27:48.978 [2024-07-15 09:36:00.056649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.978 [2024-07-15 09:36:00.056680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.978 qpair failed and we were unable to recover it. 00:27:48.978 [2024-07-15 09:36:00.056780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.978 [2024-07-15 09:36:00.056818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.978 qpair failed and we were unable to recover it. 00:27:48.978 [2024-07-15 09:36:00.056953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.978 [2024-07-15 09:36:00.056984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.978 qpair failed and we were unable to recover it. 00:27:48.978 [2024-07-15 09:36:00.057088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.978 [2024-07-15 09:36:00.057135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.978 qpair failed and we were unable to recover it. 00:27:48.978 [2024-07-15 09:36:00.057288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.978 [2024-07-15 09:36:00.057320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.978 qpair failed and we were unable to recover it. 00:27:48.978 [2024-07-15 09:36:00.057478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.978 [2024-07-15 09:36:00.057510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.978 qpair failed and we were unable to recover it. 00:27:48.978 [2024-07-15 09:36:00.057610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.978 [2024-07-15 09:36:00.057641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.978 qpair failed and we were unable to recover it. 00:27:48.978 [2024-07-15 09:36:00.057768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.978 [2024-07-15 09:36:00.057813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.978 qpair failed and we were unable to recover it. 00:27:48.978 [2024-07-15 09:36:00.057935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.978 [2024-07-15 09:36:00.057967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.978 qpair failed and we were unable to recover it. 
00:27:48.978 [2024-07-15 09:36:00.058062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.978 [2024-07-15 09:36:00.058093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:48.978 qpair failed and we were unable to recover it.
00:27:48.983 [... the same three-record connect()/qpair failure repeats continuously from 09:36:00.058 through 09:36:00.095, cycling over tqpair values 0x7f6a58000b90, 0x7f6a60000b90, 0x7f6a50000b90, and 0x1d69200, every attempt against addr=10.0.0.2, port=4420 and every attempt ending in errno = 111 followed by "qpair failed and we were unable to recover it."; duplicate records elided ...]
00:27:48.983 [2024-07-15 09:36:00.095463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.095506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.095677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.095720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.095883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.095925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.096095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.096137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.096268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.096311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.096487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.096530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.096659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.096701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.096901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.096967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.097165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.097211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.097412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.097455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 
00:27:48.983 [2024-07-15 09:36:00.097629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.097672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.097838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.097882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.098061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.098106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.098250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.098294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.098439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.098482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.098649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.098692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.098868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.098912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.099087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.099130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.099267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.099310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.983 [2024-07-15 09:36:00.099488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.099531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 
00:27:48.983 [2024-07-15 09:36:00.099690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.983 [2024-07-15 09:36:00.099756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.983 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.099917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.099985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.100216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.100281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.100538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.100601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.100817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.100861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.101036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.101082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.101230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.101274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.101433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.101477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.101639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.101707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.101906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.101952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 
00:27:48.984 [2024-07-15 09:36:00.102099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.102145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.102319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.102365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.102521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.102566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.102775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.102832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.103014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.103060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.103239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.103284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.103463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.103507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.103685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.103730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.103912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.103965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.104121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.104166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 
00:27:48.984 [2024-07-15 09:36:00.104373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.104418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.104594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.104639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.104810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.104856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.105039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.105083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.105228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.105274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.105481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.105527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.105656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.105702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.105872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.105918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.106079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.106124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 00:27:48.984 [2024-07-15 09:36:00.106273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.984 [2024-07-15 09:36:00.106318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.984 qpair failed and we were unable to recover it. 
00:27:48.984 [2024-07-15 09:36:00.106497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.106542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.106713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.106758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.106941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.106987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.107166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.107215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.107365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.107412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.107570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.107615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.107761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.107824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.108003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.108048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.108255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.108301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.108453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.108500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 
00:27:48.985 [2024-07-15 09:36:00.108641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.108675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.108826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.108860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.108970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.109003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.109151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.109185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.109330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.109363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.109506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.109540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.109676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.109710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.109933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.109979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.110160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.110205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.110371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.110404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 
00:27:48.985 [2024-07-15 09:36:00.110508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.110542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.985 qpair failed and we were unable to recover it. 00:27:48.985 [2024-07-15 09:36:00.110645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.985 [2024-07-15 09:36:00.110683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.986 qpair failed and we were unable to recover it. 00:27:48.986 [2024-07-15 09:36:00.110796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.986 [2024-07-15 09:36:00.110846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.986 qpair failed and we were unable to recover it. 00:27:48.986 [2024-07-15 09:36:00.110961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.986 [2024-07-15 09:36:00.110994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:48.986 qpair failed and we were unable to recover it. 00:27:49.261 [2024-07-15 09:36:00.111120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.261 [2024-07-15 09:36:00.111154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.261 qpair failed and we were unable to recover it. 00:27:49.261 [2024-07-15 09:36:00.111294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.261 [2024-07-15 09:36:00.111355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.261 qpair failed and we were unable to recover it. 00:27:49.261 [2024-07-15 09:36:00.111494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.261 [2024-07-15 09:36:00.111541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.261 qpair failed and we were unable to recover it. 00:27:49.261 [2024-07-15 09:36:00.111688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.261 [2024-07-15 09:36:00.111733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.261 qpair failed and we were unable to recover it. 00:27:49.261 [2024-07-15 09:36:00.111917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.261 [2024-07-15 09:36:00.111980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.261 qpair failed and we were unable to recover it. 00:27:49.261 [2024-07-15 09:36:00.112167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.112214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 
00:27:49.262 [2024-07-15 09:36:00.112346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.112392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.112598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.112643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.112791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.112845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.113013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.113047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.113164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.113198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.113308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.113362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.113497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.113553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.113660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.113694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.113863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.113914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.114077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.114111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 
00:27:49.262 [2024-07-15 09:36:00.114211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.114261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.114404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.114438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.114558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.114593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.114732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.114766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.114960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.115029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.115208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.115244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.115405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.115461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.115607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.115642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.115762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.115797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.115976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.116010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 
00:27:49.262 [2024-07-15 09:36:00.116204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.116252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.116396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.116430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.116577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.116611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.116815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.116861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.117049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.117084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.117211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.117244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.117356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.117390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.117495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.117528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.117716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.117761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.117906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.117943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 
00:27:49.262 [2024-07-15 09:36:00.118051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.118087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.118215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.118249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.118363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.118398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.118577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.118613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.118715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.262 [2024-07-15 09:36:00.118749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.262 qpair failed and we were unable to recover it. 00:27:49.262 [2024-07-15 09:36:00.118923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.118959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.119095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.119135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.119296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.119346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.119528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.119575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.119753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.119849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 
00:27:49.263 [2024-07-15 09:36:00.120082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.120126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.120254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.120297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.120456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.120499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.120664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.120707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.120858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.120904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.121030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.121071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.121196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.121237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.121371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.121414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.121576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.121623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.121790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.121848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 
00:27:49.263 [2024-07-15 09:36:00.122014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.122061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.122203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.122249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.122409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.122456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.122666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.122735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.122955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.123039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.123281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.123376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.123599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.123644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.123796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.123883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.124083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.124175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.124335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.124381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 
00:27:49.263 [2024-07-15 09:36:00.124550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.124596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.124733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.124778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.124994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.125059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.125267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.125332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.125594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.125639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.125790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.125880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.126113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.126179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.126457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.126521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.126686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.126766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.126967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.127033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 
00:27:49.263 [2024-07-15 09:36:00.127240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.127305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.127539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.127603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.127763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.127827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.127995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.128059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.263 [2024-07-15 09:36:00.128253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.263 [2024-07-15 09:36:00.128317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.263 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.128506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.128570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.128817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.128882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.129086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.129152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.129317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.129401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.129580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.129626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 
00:27:49.264 [2024-07-15 09:36:00.129780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.129838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.129980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.130026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.130178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.130223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.130405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.130450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.130608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.130673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.130849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.130896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.131036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.131083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.131246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.131287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.131453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.131494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.131691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.131736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 
00:27:49.264 [2024-07-15 09:36:00.131950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.132016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.132240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.132305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.132543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.132610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.132845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.132892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.133057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.133116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.133345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.133409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.133573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.133620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.133815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.133863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.134003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.134048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.134228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.134298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 
00:27:49.264 [2024-07-15 09:36:00.134486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.134534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.134671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.134717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.134897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.134945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.135107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.135151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.135351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.135396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.135541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.135593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.135773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.135838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.135979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.136024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.136199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.136245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.136472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.136515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 
00:27:49.264 [2024-07-15 09:36:00.136640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.136681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.136845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.136886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.137051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.264 [2024-07-15 09:36:00.137092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.264 qpair failed and we were unable to recover it. 00:27:49.264 [2024-07-15 09:36:00.137236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.137276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.137426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.137467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.137593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.137634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.137755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.137795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.137930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.137971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.138141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.138191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.138364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.138405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 
00:27:49.265 [2024-07-15 09:36:00.138565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.138605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.138726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.138767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.138938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.138978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.139123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.139164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.139323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.139365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.139492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.139533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.139663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.139705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.139871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.139912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.140056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.140096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.140255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.140296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 
00:27:49.265 [2024-07-15 09:36:00.140451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.140491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.140656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.140697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.140869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.140916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.141068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.141108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.141272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.141313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.141475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.141516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.141678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.141717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.141857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.141899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.142030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.142070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.142230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.142271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 
00:27:49.265 [2024-07-15 09:36:00.142398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.142438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.142561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.142601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.142723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.142764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.142931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.142973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.143114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.143155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.143312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.143352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.143490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.143531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.143654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.143694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.143858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.143900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.144096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.144138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 
00:27:49.265 [2024-07-15 09:36:00.144267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.144308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.144449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.144489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.265 [2024-07-15 09:36:00.144615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.265 [2024-07-15 09:36:00.144656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.265 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.144795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.144846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.144985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.145026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.145147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.145187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.145327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.145368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.145489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.145530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.145681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.145722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.145879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.145921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 
00:27:49.266 [2024-07-15 09:36:00.146058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.146099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.146227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.146268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.146410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.146450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.146638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.146678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.146833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.146874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.147063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.147112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.147308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.147348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.147535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.147576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.147689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.147729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.147901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.147943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 
00:27:49.266 [2024-07-15 09:36:00.148073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.148124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.148276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.148317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.148435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.148475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.148646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.148687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.148850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.148891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.149055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.149097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.149257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.149297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.149441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.149481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.149635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.149676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.149869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.149910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 
00:27:49.266 [2024-07-15 09:36:00.150054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.150094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.150248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.266 [2024-07-15 09:36:00.150289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.266 qpair failed and we were unable to recover it. 00:27:49.266 [2024-07-15 09:36:00.150419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.150459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.150637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.150678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.150842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.150883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.151013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.151053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.151194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.151234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.151369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.151410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.151534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.151574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.151742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.151782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 
00:27:49.267 [2024-07-15 09:36:00.151956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.151995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.152134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.152176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.152327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.152367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.152495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.152535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.152655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.152696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.152856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.152897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.153063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.153102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.153232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.153270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.153412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.153452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.153612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.153650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 
00:27:49.267 [2024-07-15 09:36:00.153817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.153862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.153999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.154038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.154208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.154249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.154409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.154449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.154602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.154641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.154794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.154855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.155018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.155058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.155174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.155214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.155344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.155383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.155535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.155575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 
00:27:49.267 [2024-07-15 09:36:00.155732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.155772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.155949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.155990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.156114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.156154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.156316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.156356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.156519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.156560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.156699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.156739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.156944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.156985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.157155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.157194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.157327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.157366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.157498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.157538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 
00:27:49.267 [2024-07-15 09:36:00.157668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.157708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.157851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.157891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.267 qpair failed and we were unable to recover it. 00:27:49.267 [2024-07-15 09:36:00.158023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.267 [2024-07-15 09:36:00.158063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.158225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.158265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.158395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.158434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.158571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.158610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.158741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.158781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.158934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.158980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.159135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.159175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.159345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.159386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 
00:27:49.268 [2024-07-15 09:36:00.159513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.159554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.159690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.159729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.159874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.159913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.160038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.160078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.160244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.160283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.160419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.160457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.160578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.160619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.160814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.160854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.161016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.161055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 00:27:49.268 [2024-07-15 09:36:00.161179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.268 [2024-07-15 09:36:00.161219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.268 qpair failed and we were unable to recover it. 
00:27:49.268 [2024-07-15 09:36:00.161340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.161381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.161512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.161553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.161676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.161715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.161871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.161932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.162111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.162154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.162274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.162315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.162446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.162486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.162641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.162680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.162816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.162857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.163032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.163072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.163206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.163246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.163380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.163419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.163544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.163584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.163744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.163783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.163926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.163976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.164144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.164185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.164361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.164402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.164566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.164608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.164738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.164782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.164937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.164977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.165109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.165150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.165277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.165317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.268 qpair failed and we were unable to recover it.
00:27:49.268 [2024-07-15 09:36:00.165473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.268 [2024-07-15 09:36:00.165514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.165701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.165740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.165907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.165949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.166076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.166115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.166274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.166314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.166434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.166475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.166611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.166650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.166774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.166823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.167015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.167055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.167215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.167255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.167422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.167461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.167617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.167656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.167821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.167861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.167994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.168034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.168160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.168201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.168364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.168404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.168563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.168601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.168742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.168781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.168933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.168973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.169131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.169176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.169332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.169372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.169507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.169547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.169676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.169717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.169887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.169929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.170071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.170111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.170277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.170316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.170481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.170521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.170695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.170736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.170907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.170947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.171104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.171144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.171300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.171339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.171475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.171516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.171677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.171718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.171864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.171906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.172023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.172065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.172192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.172232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.172355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.172395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.172551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.172592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.172751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.172791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.172924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.172964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.173098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.173138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.269 [2024-07-15 09:36:00.173277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.269 [2024-07-15 09:36:00.173317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.269 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.173507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.173547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.173685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.173726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.173894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.173935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.174101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.174141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.174269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.174309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.174449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.174490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.174626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.174666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.174831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.174872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.174998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.175039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.175254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.175293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.175426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.175466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.175655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.175695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.175822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.175863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.175995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.176044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.176178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.176218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.176347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.176387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.176512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.176552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.176674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.176713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.176885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.176926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.177083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.177123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.177312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.177352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.177484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.177524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.177681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.177722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.177882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.177923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.178041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.178082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.178242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.178283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.178399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.178440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.178590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.178631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.178783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.178831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.178994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.179034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.179164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.179204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.179367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.179407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.179572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.179612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.179742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.179783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.179943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.270 [2024-07-15 09:36:00.179983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.270 qpair failed and we were unable to recover it.
00:27:49.270 [2024-07-15 09:36:00.180121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.180161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.180278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.180318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.180481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.180521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.180633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.180673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.180790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.180837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.180970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.181010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.181137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.181178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.181333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.181373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.181531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.181571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.181751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.181794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.181981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.182028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.182183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.182223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.182398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.182439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.182604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.182646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.182821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.182863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.183035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.183075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.183243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.183283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.183440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.183480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.183622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.183662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.183850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.183893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.184013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.184054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.184225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.184266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.184400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.184439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.184635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.184675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.184815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.184856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.184979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.185018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.185180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.185220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.185414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.185454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.185584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.185624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.185757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.185798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.185982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.186022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.186151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.186192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.186354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.186394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.186527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.186567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.186733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.186774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.186957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.186998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.187124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.187165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.187296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.187343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.187502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.187542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.187676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.187716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.187882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.271 [2024-07-15 09:36:00.187923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.271 qpair failed and we were unable to recover it.
00:27:49.271 [2024-07-15 09:36:00.188072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.188113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.188248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.188288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.188425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.188465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.188637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.188677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.188841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.188883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.189029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.189069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.189204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.189244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.189375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.189416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.189542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.189582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.189732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.189772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.189952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.189993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.190130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.190171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.190361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.190401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.190555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.190596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.190754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.190794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.190968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.191008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.191148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.191188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.191318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.191358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.191547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.191587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.191719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.191758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.191959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.192001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.192143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.192183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.192303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.192343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.192507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.192553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.192716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.192756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.192941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.192982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.193110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.193153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.193309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.193349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.193502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.193542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.193707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.272 [2024-07-15 09:36:00.193747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.272 qpair failed and we were unable to recover it.
00:27:49.272 [2024-07-15 09:36:00.193931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.272 [2024-07-15 09:36:00.193973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.272 qpair failed and we were unable to recover it. 00:27:49.272 [2024-07-15 09:36:00.194110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.272 [2024-07-15 09:36:00.194149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.272 qpair failed and we were unable to recover it. 00:27:49.272 [2024-07-15 09:36:00.194310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.272 [2024-07-15 09:36:00.194350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.272 qpair failed and we were unable to recover it. 00:27:49.272 [2024-07-15 09:36:00.194485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.272 [2024-07-15 09:36:00.194525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.272 qpair failed and we were unable to recover it. 00:27:49.272 [2024-07-15 09:36:00.194680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.272 [2024-07-15 09:36:00.194721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.272 qpair failed and we were unable to recover it. 00:27:49.272 [2024-07-15 09:36:00.194867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.272 [2024-07-15 09:36:00.194908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.272 qpair failed and we were unable to recover it. 00:27:49.272 [2024-07-15 09:36:00.195065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.272 [2024-07-15 09:36:00.195105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.272 qpair failed and we were unable to recover it. 00:27:49.272 [2024-07-15 09:36:00.195290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.272 [2024-07-15 09:36:00.195351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.272 qpair failed and we were unable to recover it. 00:27:49.272 [2024-07-15 09:36:00.195499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.272 [2024-07-15 09:36:00.195543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.272 qpair failed and we were unable to recover it. 00:27:49.272 [2024-07-15 09:36:00.195703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.272 [2024-07-15 09:36:00.195744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.272 qpair failed and we were unable to recover it. 
00:27:49.272 [2024-07-15 09:36:00.195929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.272 [2024-07-15 09:36:00.195972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.272 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.196108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.196148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.196278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.196318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.196440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.196481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.196669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.196709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.196850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.196892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.197096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.197137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.197276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.197317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.197443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.197483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.197611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.197653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 
00:27:49.273 [2024-07-15 09:36:00.197845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.197892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.198033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.198073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.198191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.198231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.198396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.198436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.198575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.198616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.198777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.198829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.199003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.199043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.199181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.199221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.199363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.199404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.199540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.199580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 
00:27:49.273 [2024-07-15 09:36:00.199743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.199783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.199991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.200052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.200255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.200298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.200487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.200528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.200650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.200690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.200854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.200896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.201041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.201082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.201250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.201290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.201489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.201529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.201654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.201694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 
00:27:49.273 [2024-07-15 09:36:00.201862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.201904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.202067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.202126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.202322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.202371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.202605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.202665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.202918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.202967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.203114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.203162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.203395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.203443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.203632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.203688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.203839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.203887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.204095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.204154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 
00:27:49.273 [2024-07-15 09:36:00.204366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.273 [2024-07-15 09:36:00.204450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.273 qpair failed and we were unable to recover it. 00:27:49.273 [2024-07-15 09:36:00.204642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.204715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.204967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.205030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.205248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.205309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.205520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.205580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.205795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.205858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.206032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.206081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.206257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.206305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.206525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.206573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.206729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.206777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 
00:27:49.274 [2024-07-15 09:36:00.206979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.207027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.207236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.207287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.207500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.207555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.207741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.207789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.207971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.208019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.208174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.208223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.208380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.208429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.208646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.208697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.208906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.208941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.209086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.209121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 
00:27:49.274 [2024-07-15 09:36:00.209238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.209274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.209414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.209448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.209580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.209616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.209766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.209808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.209956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.209989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.210107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.210141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.210256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.210289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.210401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.210435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.210575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.210608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.210715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.210750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 
00:27:49.274 [2024-07-15 09:36:00.210924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.210959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.211064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.211097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.211202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.211236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.211348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.211382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.211483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.211517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.211633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.211666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.211828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.211877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.211979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.212016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.212153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.212185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.212294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.212326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 
00:27:49.274 [2024-07-15 09:36:00.212434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.212466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.212603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.274 [2024-07-15 09:36:00.212636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.274 qpair failed and we were unable to recover it. 00:27:49.274 [2024-07-15 09:36:00.212755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.212816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.212964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.212998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.213116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.213151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.213289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.213327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.213434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.213467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.213608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.213641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.213780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.213820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.213969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.214001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 
00:27:49.275 [2024-07-15 09:36:00.214142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.214175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.214289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.214322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.214457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.214489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.214599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.214632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.214735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.214767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.214915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.214948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.215043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.215076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.215185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.215217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.215349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.215392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.215510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.215542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 
00:27:49.275 [2024-07-15 09:36:00.215655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.215687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.215823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.215871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.215971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.216003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.216115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.216146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.216279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.216311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.216416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.216447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.216538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.216569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.216689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.216719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.216834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.216865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.216963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.216994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 
00:27:49.275 [2024-07-15 09:36:00.217128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.217160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.217268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.217294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.217382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.217409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.217500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.217546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.217673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.217720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.217841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.217888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.218021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.218069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.275 qpair failed and we were unable to recover it. 00:27:49.275 [2024-07-15 09:36:00.218192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.275 [2024-07-15 09:36:00.218243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.218379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.218407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.218496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.218524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 
00:27:49.276 [2024-07-15 09:36:00.218639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.218666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.218757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.218786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.218896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.218924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.219048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.219075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.219158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.219185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.219300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.219328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.219417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.219444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.219533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.219561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.219649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.219679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.219807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.219835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 
00:27:49.276 [2024-07-15 09:36:00.219922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.219950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.220040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.220068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.220166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.220194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.220285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.220312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.220430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.220457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.220541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.220580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.220724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.220752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.220876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.220904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.220989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.221017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 00:27:49.276 [2024-07-15 09:36:00.221137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.276 [2024-07-15 09:36:00.221164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.276 qpair failed and we were unable to recover it. 
00:27:49.276 [2024-07-15 09:36:00.221246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.276 [2024-07-15 09:36:00.221273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.276 qpair failed and we were unable to recover it.
[... the same posix_sock_create connect() / nvme_tcp_qpair_connect_sock error pair, each followed by "qpair failed and we were unable to recover it.", repeats for every attempt from 09:36:00.221360 through 09:36:00.247696, cycling through tqpair=0x7f6a60000b90, 0x7f6a50000b90, and 0x1d69200, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:27:49.281 [2024-07-15 09:36:00.247784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.281 [2024-07-15 09:36:00.247825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.281 qpair failed and we were unable to recover it.
00:27:49.281 [2024-07-15 09:36:00.247922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.281 [2024-07-15 09:36:00.247948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.281 qpair failed and we were unable to recover it. 00:27:49.281 [2024-07-15 09:36:00.248033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.281 [2024-07-15 09:36:00.248059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.281 qpair failed and we were unable to recover it. 00:27:49.281 [2024-07-15 09:36:00.248166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.281 [2024-07-15 09:36:00.248192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.281 qpair failed and we were unable to recover it. 00:27:49.281 [2024-07-15 09:36:00.248323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.281 [2024-07-15 09:36:00.248353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.281 qpair failed and we were unable to recover it. 00:27:49.281 [2024-07-15 09:36:00.248448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.281 [2024-07-15 09:36:00.248478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.281 qpair failed and we were unable to recover it. 00:27:49.281 [2024-07-15 09:36:00.248606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.281 [2024-07-15 09:36:00.248635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.281 qpair failed and we were unable to recover it. 00:27:49.281 [2024-07-15 09:36:00.248726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.281 [2024-07-15 09:36:00.248756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.281 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.248889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.248928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.249020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.249047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.249208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.249238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 
00:27:49.282 [2024-07-15 09:36:00.249348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.249375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.249462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.249489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.249579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.249625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.249753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.249783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.249900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.249926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.250062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.250112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.250240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.250270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.250430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.250460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.250579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.250608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.250735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.250765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 
00:27:49.282 [2024-07-15 09:36:00.250909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.250935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.251023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.251048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.251152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.251187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.251322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.251352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.251469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.251498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.251601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.251630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.251755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.251785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.251897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.251924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.252036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.252062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.252196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.252225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 
00:27:49.282 [2024-07-15 09:36:00.252321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.252351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.252471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.252501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.252593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.252623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.252727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.252756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.252920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.252946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.253057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.253105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.253229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.253259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.253380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.253410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.253558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.253587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.253673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.253703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 
00:27:49.282 [2024-07-15 09:36:00.253828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.253871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.253954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.253980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.254136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.254165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.254281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.254310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.254412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.254442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.254529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.254558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.254646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.282 [2024-07-15 09:36:00.254675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.282 qpair failed and we were unable to recover it. 00:27:49.282 [2024-07-15 09:36:00.254791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.254831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.254932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.254958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.255041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.255068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 
00:27:49.283 [2024-07-15 09:36:00.255206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.255231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.255355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.255384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.255500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.255546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.255656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.255686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.255818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.255860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.255940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.255965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.256078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.256123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.256280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.256310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.256404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.256435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.256559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.256590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 
00:27:49.283 [2024-07-15 09:36:00.256697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.256744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.256904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.256932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.257046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.257078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.257191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.257217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.257340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.257371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.257472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.257503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.257608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.257642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.257805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.257851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.257937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.257963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.258047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.258094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 
00:27:49.283 [2024-07-15 09:36:00.258247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.258278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.258404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.258435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.258562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.258593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.258719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.258749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.258908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.258947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.259069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.259114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.259218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.259250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.259379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.259410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.259532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.259563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.259687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.259717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 
00:27:49.283 [2024-07-15 09:36:00.259864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.259896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.283 qpair failed and we were unable to recover it. 00:27:49.283 [2024-07-15 09:36:00.260029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.283 [2024-07-15 09:36:00.260060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.260216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.260247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.260380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.260411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.260514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.260545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.260650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.260681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.260808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.260839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.260933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.260964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.261090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.261121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.261281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.261315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 
00:27:49.284 [2024-07-15 09:36:00.261418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.261449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.261555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.261586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.261707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.261737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.261898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.261929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.262027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.262058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.262186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.262219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.262344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.262375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.262477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.262507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.262638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.262669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.262795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.262845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 
00:27:49.284 [2024-07-15 09:36:00.262945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.262975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.263069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.263099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.263195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.263231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.263324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.263355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.263482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.263512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.263618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.263650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.263809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.263852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.263947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.263974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.264083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.264110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.264219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.264250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 
00:27:49.284 [2024-07-15 09:36:00.264349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.264380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.264487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.264519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.264672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.264703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.264839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.264872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.265029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.265060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.265253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.265285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.265420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.265451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.265579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.265611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.265720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.265751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.265868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.265900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 
00:27:49.284 [2024-07-15 09:36:00.266037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.266069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.266170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.284 [2024-07-15 09:36:00.266202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.284 qpair failed and we were unable to recover it. 00:27:49.284 [2024-07-15 09:36:00.266334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.285 [2024-07-15 09:36:00.266366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.285 qpair failed and we were unable to recover it. 00:27:49.285 [2024-07-15 09:36:00.266524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.285 [2024-07-15 09:36:00.266556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.285 qpair failed and we were unable to recover it. 00:27:49.285 [2024-07-15 09:36:00.266662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.285 [2024-07-15 09:36:00.266694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.285 qpair failed and we were unable to recover it. 00:27:49.285 [2024-07-15 09:36:00.266828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.285 [2024-07-15 09:36:00.266855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.285 qpair failed and we were unable to recover it. 00:27:49.285 [2024-07-15 09:36:00.266939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.285 [2024-07-15 09:36:00.266964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.285 qpair failed and we were unable to recover it. 00:27:49.285 [2024-07-15 09:36:00.267068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.285 [2024-07-15 09:36:00.267093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.285 qpair failed and we were unable to recover it. 00:27:49.285 [2024-07-15 09:36:00.267178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.285 [2024-07-15 09:36:00.267204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.285 qpair failed and we were unable to recover it. 00:27:49.285 [2024-07-15 09:36:00.267314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.285 [2024-07-15 09:36:00.267344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.285 qpair failed and we were unable to recover it. 
00:27:49.285 [2024-07-15 09:36:00.267463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.285 [2024-07-15 09:36:00.267512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.285 qpair failed and we were unable to recover it.
[... the same three-line pattern repeats for every subsequent connection attempt (console timestamps 00:27:49.285-00:27:49.291, event timestamps 09:36:00.267-09:36:00.300): posix_sock_create() reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock() reports a sock connection error, and the qpair is declared unrecoverable; the tqpair value cycles through 0x7f6a60000b90, 0x1d69200, 0x7f6a50000b90, and 0x7f6a58000b90, always with addr=10.0.0.2, port=4420 ...]
00:27:49.291 [2024-07-15 09:36:00.300410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.300436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.300554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.300583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.300677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.300705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.300817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.300845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.300927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.300958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.301070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.301096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.301207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.301232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.301314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.301340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.301428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.301456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.301540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.301566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 
00:27:49.291 [2024-07-15 09:36:00.301650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.301677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.301761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.301788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.301881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.301907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.301989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.302015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.302105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.302131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.302222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.302251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.302332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.302358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.302442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.302468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.302553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.302579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.302686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.302712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 
00:27:49.291 [2024-07-15 09:36:00.302793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.302827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.302915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.302941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.303015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.303040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.303128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.303153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.303246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.303274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.303362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.303388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.303494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.303521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.303602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.303628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.303708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.303734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.303828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.303854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 
00:27:49.291 [2024-07-15 09:36:00.303930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.303956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.304037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.304067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.291 qpair failed and we were unable to recover it. 00:27:49.291 [2024-07-15 09:36:00.304184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.291 [2024-07-15 09:36:00.304211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.304297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.304323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.304410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.304439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.304523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.304550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.304661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.304688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.304797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.304828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.304909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.304935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.305050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.305076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 
00:27:49.292 [2024-07-15 09:36:00.305182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.305207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.305322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.305350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.305451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.305490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.305575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.305602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.305740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.305765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.305865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.305910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.306015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.306041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.306137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.306164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.306249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.306275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.306359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.306386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 
00:27:49.292 [2024-07-15 09:36:00.306473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.306500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.306578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.306604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.306713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.306739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.306835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.306863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.306954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.306982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.307093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.307119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.307201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.307226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.307352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.307378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.307487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.307513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.307653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.307679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 
00:27:49.292 [2024-07-15 09:36:00.307771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.307798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.307922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.307946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.308030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.308056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.308134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.308159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.308295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.308321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.308433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.308460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.308571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.308597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.308671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.308696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.308818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.308845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 00:27:49.292 [2024-07-15 09:36:00.308923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.308949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.292 qpair failed and we were unable to recover it. 
00:27:49.292 [2024-07-15 09:36:00.309031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.292 [2024-07-15 09:36:00.309056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.309157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.309183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.309274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.309299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.309378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.309404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.309479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.309504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.309588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.309615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.309719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.309745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.309841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.309868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.309949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.309976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.310061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.310087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 
00:27:49.293 [2024-07-15 09:36:00.310177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.310203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.310289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.310317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.310409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.310435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.310545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.310571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.310686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.310712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.310837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.310866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.310976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.311002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.311086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.311113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.311195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.311221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.311310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.311337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 
00:27:49.293 [2024-07-15 09:36:00.311419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.311446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.311561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.311588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.311673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.311698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.311814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.311840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.311932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.311958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.312036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.312062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.312174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.312200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.312310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.312335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.312421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.312451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.312577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.312617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 
00:27:49.293 [2024-07-15 09:36:00.312706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.312733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.312828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.312854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.312943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.312969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.313058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.313084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.313221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.313247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.293 [2024-07-15 09:36:00.313326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.293 [2024-07-15 09:36:00.313352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.293 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.313424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.313449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.313553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.313578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.313658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.313685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.313778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.313815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 
00:27:49.294 [2024-07-15 09:36:00.313900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.313927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.314015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.314041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.314137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.314163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.314277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.314303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.314416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.314442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.314551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.314577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.314659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.314686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.314814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.314841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.314951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.315001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.315106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.315132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 
00:27:49.294 [2024-07-15 09:36:00.315213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.315239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.315344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.315370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.315482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.315507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.315617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.315642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.315759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.315786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.315913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.315960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.316076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.316104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.316241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.316267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.316352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.316378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 00:27:49.294 [2024-07-15 09:36:00.316469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.294 [2024-07-15 09:36:00.316495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.294 qpair failed and we were unable to recover it. 
00:27:49.294 [2024-07-15 09:36:00.316610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.316636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.316725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.316763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.316860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.316889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.316984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.317011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.317095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.317121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.317207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.317234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.317322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.317350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.317465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.317491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.317599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.317624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.317717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.317742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.317843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.317872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.317960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.317986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.318069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.318095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.318186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.318212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.318294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.318320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.318456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.318483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.294 [2024-07-15 09:36:00.318597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.294 [2024-07-15 09:36:00.318623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.294 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.318708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.318734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.318862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.318888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.318980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.319006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.319085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.319110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.319191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.319219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.319306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.319332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.319410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.319436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.319551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.319577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.319652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.319678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.319782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.319836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.319924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.319951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.320030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.320056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.320166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.320191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.320302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.320327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.320414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.320443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.320530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.320557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.320647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.320674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.320752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.320777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.320866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.320898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.320992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.321019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.321132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.321158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.321236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.321262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.321344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.321371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.321488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.321515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.321598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.321625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.321706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.321732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.321830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.321856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.321941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.321966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.322045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.322071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.322150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.322176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.322250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.322275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.322378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.322407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.322498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.322526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.322642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.322668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.322756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.322782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.322884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.322911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.323027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.323053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.323166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.323192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.323269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.323295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.323407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.323433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.295 qpair failed and we were unable to recover it.
00:27:49.295 [2024-07-15 09:36:00.323544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.295 [2024-07-15 09:36:00.323569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.323689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.323716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.323814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.323840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.323931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.323959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.324043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.324070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.324195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.324233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.324316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.324342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.324450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.324475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.324561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.324586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.324699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.324724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.324811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.324837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.324919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.324944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.325023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.325048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.325162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.325187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.325271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.325297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.325390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.325415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.325499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.325524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.325632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.325660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.325766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.325814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.325924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.325950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.326038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.326064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.326173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.326198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.326310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.326336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.326445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.326471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.326595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.326634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.326745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.326774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.326870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.326896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.326987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.327014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.327122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.327148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.327251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.327276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.327379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.327405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.327489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.327515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.327632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.327660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.327750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.327776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.327867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.327893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.327980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.328006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.328134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.328160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.328244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.328270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.328352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.328378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.328482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.328508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.328630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.296 [2024-07-15 09:36:00.328669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.296 qpair failed and we were unable to recover it.
00:27:49.296 [2024-07-15 09:36:00.328755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.328783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.328875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.328902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.328988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.329015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.329104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.329131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.329250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.329278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.329368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.329394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.329499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.329525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.329634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.329661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.329745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.329772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.329869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.329899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.329989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.330015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.330155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.330180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.330292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.330318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.330400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.330426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.330513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.330539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.330618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.330643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.330754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.330782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.330909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.330938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.331056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.331082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.331194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.331222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.331332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.331357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.331435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.331462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.331577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.331604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.331684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.331710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.331816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.331843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.331927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.331954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.332061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.332086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.332167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.332192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.332273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.332299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.332390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.332416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.332497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.332523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.332638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.332664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.332770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.332796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.332876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.332902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.332981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.333007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.297 [2024-07-15 09:36:00.333105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.297 [2024-07-15 09:36:00.333131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.297 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.333243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.333268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.333347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.333373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.333460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.333485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.333592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.333618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.333744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.333783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.333905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.333932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.334009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.334035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.334144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.334170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.334279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.334309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.334444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.334469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.334565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.334591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.334714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.334754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.334861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.334888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.335005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.335031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.335121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.335148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.335263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.335290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.335371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.335398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.335496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.335525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.335655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.335697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.335794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.335832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.335946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.335974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.336072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.336121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.336236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.336272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.336415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.336451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.336607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.336640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.336770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.336818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.336963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.336990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.337167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.337194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.337273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.337301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.337414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.337461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.337565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.337603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.337736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.337765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.337884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.337913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.338033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.338061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.338183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.338211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.338384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.338420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.338561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.338596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.338739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.338775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.338924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.338966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.339117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.339145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.298 [2024-07-15 09:36:00.339341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.298 [2024-07-15 09:36:00.339375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.298 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.339523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.339570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.339660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.339690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.339785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.339817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.339911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.339939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.340032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.340060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.340189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.340227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.340412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.340461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.340571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.340613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.340760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.340787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.340906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.340933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.341051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.341078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.341191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.341218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.341354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.341387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.341518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.341552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.341709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.341748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.341891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.341924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.342051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.342079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.342174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.342201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.342390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.342425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.342554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.342602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.342719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.342746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.342882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.342922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.343043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.343070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.343162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.343191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.343280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.343308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.343396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.343423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.343514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.343541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.343656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.343683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.343775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.343807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.343934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.343961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.344058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.344089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.344192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.344220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.344344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.344372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.344490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.344517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.344618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.299 [2024-07-15 09:36:00.344660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.299 qpair failed and we were unable to recover it.
00:27:49.299 [2024-07-15 09:36:00.344781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.299 [2024-07-15 09:36:00.344832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.299 qpair failed and we were unable to recover it. 00:27:49.299 [2024-07-15 09:36:00.344955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.299 [2024-07-15 09:36:00.344986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.299 qpair failed and we were unable to recover it. 00:27:49.299 [2024-07-15 09:36:00.345115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.299 [2024-07-15 09:36:00.345143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.299 qpair failed and we were unable to recover it. 00:27:49.299 [2024-07-15 09:36:00.345235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.299 [2024-07-15 09:36:00.345263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.299 qpair failed and we were unable to recover it. 00:27:49.299 [2024-07-15 09:36:00.345361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.299 [2024-07-15 09:36:00.345410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.299 qpair failed and we were unable to recover it. 00:27:49.299 [2024-07-15 09:36:00.345522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.299 [2024-07-15 09:36:00.345558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.345696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.345738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.345864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.345895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.345992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.346030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.346119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.346148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 
00:27:49.300 [2024-07-15 09:36:00.346243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.346291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.346399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.346448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.346598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.346647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.346785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.346853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.346974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.347003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.347137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.347174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.347291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.347327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.347464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.347502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.347637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.347665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.347758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.347785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 
00:27:49.300 [2024-07-15 09:36:00.347884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.347912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.348011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.348038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.348121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.348148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.348262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.348289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.348371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.348399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.348501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.348542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.348653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.348683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.348778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.348814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.348945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.348973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.349118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.349145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 
00:27:49.300 [2024-07-15 09:36:00.349233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.349260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.349340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.349368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.349486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.349514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.349602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.349629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.349741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.349768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.349882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.349924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.350019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.350047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.350163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.350191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.350273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.350300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.350425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.350460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 
00:27:49.300 [2024-07-15 09:36:00.350566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.350594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.350683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.350711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.350820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.350850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.350938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.350966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.351103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.351140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.351301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.351344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.300 qpair failed and we were unable to recover it. 00:27:49.300 [2024-07-15 09:36:00.351435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.300 [2024-07-15 09:36:00.351464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.351580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.351608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.351706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.351735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.351837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.351867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 
00:27:49.301 [2024-07-15 09:36:00.351993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.352020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.352175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.352203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.352294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.352324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.352465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.352501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.352628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.352663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.352838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.352880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.352977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.353006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.353151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.353179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.353302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.353335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.353446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.353479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 
00:27:49.301 [2024-07-15 09:36:00.353600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.353650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.353793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.353828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.353918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.353946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.354043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.354094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.354218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.354254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.354393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.354437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.354579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.354616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.354734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.354761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.354885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.354914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.354991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.355018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 
00:27:49.301 [2024-07-15 09:36:00.355103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.355130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.355216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.355244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.355353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.355386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.355480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.355514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.355648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.355685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.355847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.355877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.355972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.356001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.356083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.356111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.356200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.356248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.356405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.356433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 
00:27:49.301 [2024-07-15 09:36:00.356603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.356639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.301 qpair failed and we were unable to recover it. 00:27:49.301 [2024-07-15 09:36:00.356752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.301 [2024-07-15 09:36:00.356787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.356917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.356958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.357064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.357112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.357254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.357288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.357417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.357450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.357558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.357603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.357727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.357758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.357852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.357882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.357971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.358000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 
00:27:49.302 [2024-07-15 09:36:00.358123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.358161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.358283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.358311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.358399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.358427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.358533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.358582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.358733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.358766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.358901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.358933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.359019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.359048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.359166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.359194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.359279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.359307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.359422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.359450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 
00:27:49.302 [2024-07-15 09:36:00.359562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.359589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.359677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.359705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.359849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.359877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.359974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.360001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.360122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.360149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.360235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.360262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.360353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.360386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.360509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.360536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.360617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.360645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.360735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.360763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 
00:27:49.302 [2024-07-15 09:36:00.360850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.360878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.360987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.361015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.361102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.361129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.361213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.361242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.361360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.361385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.361485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.361513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.361612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.361657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.361816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.361843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.361933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.361960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.362045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.362070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 
00:27:49.302 [2024-07-15 09:36:00.362186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.362212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.302 qpair failed and we were unable to recover it. 00:27:49.302 [2024-07-15 09:36:00.362286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.302 [2024-07-15 09:36:00.362311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.303 qpair failed and we were unable to recover it. 00:27:49.303 [2024-07-15 09:36:00.362395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.303 [2024-07-15 09:36:00.362421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.303 qpair failed and we were unable to recover it. 00:27:49.303 [2024-07-15 09:36:00.362532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.303 [2024-07-15 09:36:00.362558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.303 qpair failed and we were unable to recover it. 00:27:49.303 [2024-07-15 09:36:00.362666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.303 [2024-07-15 09:36:00.362692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.303 qpair failed and we were unable to recover it. 00:27:49.303 [2024-07-15 09:36:00.362778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.303 [2024-07-15 09:36:00.362825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.303 qpair failed and we were unable to recover it. 00:27:49.303 [2024-07-15 09:36:00.362941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.303 [2024-07-15 09:36:00.362968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.303 qpair failed and we were unable to recover it. 00:27:49.303 [2024-07-15 09:36:00.363053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.303 [2024-07-15 09:36:00.363079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.303 qpair failed and we were unable to recover it. 00:27:49.303 [2024-07-15 09:36:00.363221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.303 [2024-07-15 09:36:00.363246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.303 qpair failed and we were unable to recover it. 00:27:49.303 [2024-07-15 09:36:00.363329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.303 [2024-07-15 09:36:00.363354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.303 qpair failed and we were unable to recover it. 
00:27:49.303 [2024-07-15 09:36:00.363436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.303 [2024-07-15 09:36:00.363462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.303 qpair failed and we were unable to recover it.
00:27:49.303 [... the three-line pattern above repeats without interruption from 09:36:00.363436 through 09:36:00.393916 (log prefixes 00:27:49.303-00:27:49.308): every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111, cycling through tqpair values 0x1d69200, 0x7f6a50000b90, 0x7f6a58000b90, and 0x7f6a60000b90, and each time "qpair failed and we were unable to recover it." ...]
00:27:49.308 [2024-07-15 09:36:00.394004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.308 [2024-07-15 09:36:00.394031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.308 qpair failed and we were unable to recover it. 00:27:49.308 [2024-07-15 09:36:00.394119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.308 [2024-07-15 09:36:00.394147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.308 qpair failed and we were unable to recover it. 00:27:49.308 [2024-07-15 09:36:00.394262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.308 [2024-07-15 09:36:00.394289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.308 qpair failed and we were unable to recover it. 00:27:49.308 [2024-07-15 09:36:00.394392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.308 [2024-07-15 09:36:00.394419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.308 qpair failed and we were unable to recover it. 00:27:49.308 [2024-07-15 09:36:00.394539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.308 [2024-07-15 09:36:00.394569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.308 qpair failed and we were unable to recover it. 00:27:49.308 [2024-07-15 09:36:00.394687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.308 [2024-07-15 09:36:00.394714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.308 qpair failed and we were unable to recover it. 00:27:49.308 [2024-07-15 09:36:00.394813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.308 [2024-07-15 09:36:00.394840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.308 qpair failed and we were unable to recover it. 00:27:49.308 [2024-07-15 09:36:00.394952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.308 [2024-07-15 09:36:00.394985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.308 qpair failed and we were unable to recover it. 00:27:49.308 [2024-07-15 09:36:00.395107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.308 [2024-07-15 09:36:00.395135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.308 qpair failed and we were unable to recover it. 00:27:49.308 [2024-07-15 09:36:00.395246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.308 [2024-07-15 09:36:00.395278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.308 qpair failed and we were unable to recover it. 
00:27:49.308 [2024-07-15 09:36:00.395364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.308 [2024-07-15 09:36:00.395390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.308 qpair failed and we were unable to recover it. 00:27:49.308 [2024-07-15 09:36:00.395499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.308 [2024-07-15 09:36:00.395526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.308 qpair failed and we were unable to recover it. 00:27:49.308 [2024-07-15 09:36:00.395629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.395656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.395746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.395774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.395866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.395894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.396004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.396039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.396189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.396223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.396365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.396398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.396538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.396572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.396711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.396739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 
00:27:49.309 [2024-07-15 09:36:00.396860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.396888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.397014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.397055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.397234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.397269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.397422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.397466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.397648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.397697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.397815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.397843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.397952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.397982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.398083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.398134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.398299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.398332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.398442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.398475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 
00:27:49.309 [2024-07-15 09:36:00.398585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.398619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.398791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.398827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.398919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.398945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.399055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.399099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.399274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.399314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.399435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.399468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.399576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.399610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.399729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.399761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.399895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.399923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.400041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.400069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 
00:27:49.309 [2024-07-15 09:36:00.400249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.400283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.400394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.400427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.400532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.400564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.400704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.400731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.400833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.400860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.401000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.401028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.401172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.401205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.401310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.401344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.401514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.401547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.401697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.401739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 
00:27:49.309 [2024-07-15 09:36:00.401857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.309 [2024-07-15 09:36:00.401886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.309 qpair failed and we were unable to recover it. 00:27:49.309 [2024-07-15 09:36:00.401976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.402004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.402114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.402148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.402339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.402387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.402491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.402527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.402676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.402705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.402818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.402860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.402980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.403013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.403166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.403199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.403309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.403342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 
00:27:49.310 [2024-07-15 09:36:00.403480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.403514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.403625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.403659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.403749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.403778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.403953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.404002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.404133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.404178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.404292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.404320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.404438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.404466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.404566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.404595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.404723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.404764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.404908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.404938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 
00:27:49.310 [2024-07-15 09:36:00.405042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.405071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.405194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.405223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.405330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.405358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.405468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.405496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.405587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.405614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.405747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.405788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.405906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.405934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.406045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.406075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.406178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.406209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.406354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.406385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 
00:27:49.310 [2024-07-15 09:36:00.406515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.406546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.310 qpair failed and we were unable to recover it. 00:27:49.310 [2024-07-15 09:36:00.406671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.310 [2024-07-15 09:36:00.406701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.406883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.406929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.407045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.407083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.407240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.407275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.407381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.407415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.407532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.407567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.407695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.407729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.407880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.407913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.408013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.408043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 
00:27:49.311 [2024-07-15 09:36:00.408225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.408260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.408392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.408426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.408553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.408587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.408698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.408727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.408882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.408925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.409029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.409058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.409225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.409258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.409392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.409424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.409550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.409577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.409720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.409746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 
00:27:49.311 [2024-07-15 09:36:00.409878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.409910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.410018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.410050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.410192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.410253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.410404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.410450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.410541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.410569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.410667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.410695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.410816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.410844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.410935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.410962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.411084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.411112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.411234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.411261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 
00:27:49.311 [2024-07-15 09:36:00.411346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.411372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.411467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.411494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.411618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.411645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.411768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.411795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.411923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.411968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.412090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.412117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.412226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.412254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.412377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.412404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.311 qpair failed and we were unable to recover it. 00:27:49.311 [2024-07-15 09:36:00.412500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.311 [2024-07-15 09:36:00.412526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.412639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.412670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 
00:27:49.312 [2024-07-15 09:36:00.412761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.412787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.412916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.412944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.413026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.413052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.413160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.413201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.413358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.413392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.413516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.413544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.413635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.413662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.413746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.413773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.413877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.413912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.414006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.414033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 
00:27:49.312 [2024-07-15 09:36:00.414131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.414158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.414270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.414301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.414414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.414444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.414565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.414607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.414737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.414766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.414916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.414948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.415079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.415114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.415222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.415252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.415377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.415408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.415535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.415566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 
00:27:49.312 [2024-07-15 09:36:00.415705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.415733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.415890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.415924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.416031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.416063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.416171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.416201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.416313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.416345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.416469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.416501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.416648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.416676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.416822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.416853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.416996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.417024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 00:27:49.312 [2024-07-15 09:36:00.417128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.312 [2024-07-15 09:36:00.417159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.312 qpair failed and we were unable to recover it. 
00:27:49.312 [2024-07-15 09:36:00.417285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.312 [2024-07-15 09:36:00.417314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.312 qpair failed and we were unable to recover it.
00:27:49.312 [2024-07-15 09:36:00.417438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.312 [2024-07-15 09:36:00.417474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.312 qpair failed and we were unable to recover it.
00:27:49.312 [2024-07-15 09:36:00.417579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.312 [2024-07-15 09:36:00.417610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.312 qpair failed and we were unable to recover it.
00:27:49.312 [2024-07-15 09:36:00.417714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.312 [2024-07-15 09:36:00.417744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.312 qpair failed and we were unable to recover it.
00:27:49.312 [2024-07-15 09:36:00.417896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.312 [2024-07-15 09:36:00.417925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.312 qpair failed and we were unable to recover it.
00:27:49.312 [2024-07-15 09:36:00.418060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.312 [2024-07-15 09:36:00.418120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.312 qpair failed and we were unable to recover it.
00:27:49.312 [2024-07-15 09:36:00.418285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.312 [2024-07-15 09:36:00.418319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.312 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.418476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.418510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.418629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.418658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.418785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.418833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.418937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.418965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.419069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.419098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.419241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.419273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.419428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.419459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.419565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.419596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.419721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.419753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.419898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.419939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.420075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.420128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.420300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.420349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.420495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.420540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.420653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.420681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.420799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.420853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.420988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.421019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.421119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.421151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.421256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.421291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.421401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.421433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.421535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.421578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.421699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.421725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.421857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.421885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.421975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.422004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.422168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.422199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.422304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.422334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.422438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.422469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.422561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.422592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.422722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.422753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.422876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.422902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.423038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.423069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.423166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.423202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.423333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.423379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.423499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.423528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.423652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.423681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.423775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.423817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.423941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.423972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.424107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.424152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.424258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.424305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.424445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.424482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.424636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.313 [2024-07-15 09:36:00.424666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.313 qpair failed and we were unable to recover it.
00:27:49.313 [2024-07-15 09:36:00.424773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.424811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.424932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.424959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.425060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.425090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.425181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.425211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.425329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.425360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.425467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.425496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.425623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.425651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.425791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.425833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.425954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.425982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.426066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.426094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.426239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.426268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.426382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.426412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.426570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.426600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.426720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.426747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.426834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.426872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.426989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.427017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.427161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.427191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.427306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.427350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.427474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.427515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.427605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.427634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.427741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.427768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.427865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.427893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.427982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.428009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.428090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.428118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.428205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.428249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.428375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.428409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.428512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.428545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.428672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.428702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.428814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.428859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.428945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.428972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.429072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.429099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.429203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.429233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.429418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.429449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.429574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.429604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.429702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.429732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.429868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.429896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.429993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.430020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.430171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.430200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.430316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.430356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.314 qpair failed and we were unable to recover it.
00:27:49.314 [2024-07-15 09:36:00.430502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.314 [2024-07-15 09:36:00.430546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.430649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.430695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.430785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.430819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.430911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.430939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.431052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.431080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.431187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.431216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.431347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.431382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.431509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.431539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.431662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.431704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.431827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.431857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.431953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.431980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.432092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.432120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.432235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.432262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.432357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.432385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.432468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.432494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.432587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.432614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.432752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.432794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.432904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.432933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.433051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.433078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.433165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.433192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.433319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.433347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.433499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.433530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.433665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.433692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.433822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.433851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.433979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.434007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.434118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.434148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.434238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.434271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.434369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.434398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.434496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.434525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.434669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.434699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.434812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.434859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.435002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.435031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.435153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.435181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.435271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.435313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.435421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.315 [2024-07-15 09:36:00.435448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.315 qpair failed and we were unable to recover it.
00:27:49.315 [2024-07-15 09:36:00.435583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.435626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.435779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.435816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.435932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.435962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.436116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.436145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.436259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.436287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.436417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.436446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.436530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.436558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.436679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.436707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.436842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.436872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.436989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.437032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.437128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.437156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.437253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.437281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.437414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.437441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.437518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.437545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.437640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.437666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.437775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.437810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.437959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.437986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.438105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.438133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.438258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.438286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.438371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.438397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.438516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.438543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.438636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.438664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.438807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.438835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.438951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.316 [2024-07-15 09:36:00.438979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.316 qpair failed and we were unable to recover it.
00:27:49.316 [2024-07-15 09:36:00.439080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.594 [2024-07-15 09:36:00.439106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.594 qpair failed and we were unable to recover it.
00:27:49.594 [2024-07-15 09:36:00.439194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.594 [2024-07-15 09:36:00.439220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.594 qpair failed and we were unable to recover it.
00:27:49.594 [2024-07-15 09:36:00.439339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.594 [2024-07-15 09:36:00.439367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.594 qpair failed and we were unable to recover it.
00:27:49.594 [2024-07-15 09:36:00.439451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.594 [2024-07-15 09:36:00.439478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.594 qpair failed and we were unable to recover it.
00:27:49.594 [2024-07-15 09:36:00.439567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.594 [2024-07-15 09:36:00.439594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.594 qpair failed and we were unable to recover it.
00:27:49.594 [2024-07-15 09:36:00.439718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.594 [2024-07-15 09:36:00.439749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.439857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.439899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.440029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.440066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.440183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.440212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.440321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.440349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.440460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.440487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.440601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.440628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.440715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.440763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.440881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.440924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.441084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.441129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.441262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.441308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.441468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.441512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.441610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.441638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.441721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.441747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.441892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.441936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.442048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.442092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.442192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.442220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.442311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.442339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.442425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.442452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.442571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.595 [2024-07-15 09:36:00.442602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.595 qpair failed and we were unable to recover it.
00:27:49.595 [2024-07-15 09:36:00.442719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.442747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.442884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.442912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.443030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.443058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.443181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.443209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.443329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.443357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.443498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.443525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.443610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.443638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.443726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.443754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.443880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.443909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.444007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.444043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 
00:27:49.595 [2024-07-15 09:36:00.444131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.444160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.444270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.444298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.444378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.444405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.444536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.444578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.444704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.444734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.444833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.444861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.444960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.444987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.445066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.445094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.445209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.445236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.445347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.445377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 
00:27:49.595 [2024-07-15 09:36:00.445536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.595 [2024-07-15 09:36:00.445563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.595 qpair failed and we were unable to recover it. 00:27:49.595 [2024-07-15 09:36:00.445653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.445680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.445791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.445831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.445967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.446011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.446104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.446132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.446247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.446275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.446384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.446411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.446504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.446531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.446608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.446635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.446759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.446786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 
00:27:49.596 [2024-07-15 09:36:00.446883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.446910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.446999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.447027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.447174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.447202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.447291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.447319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.447415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.447443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.447540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.447568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.447695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.447724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.447866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.447894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.448023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.448052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.448171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.448198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 
00:27:49.596 [2024-07-15 09:36:00.448291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.448319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.448436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.448464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.448562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.448590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.448707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.448735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.448866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.448911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.449028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.449055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.449179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.449206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.449326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.449353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.449448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.449476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.449565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.449598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 
00:27:49.596 [2024-07-15 09:36:00.449703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.449745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.449850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.449896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.449980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.450009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.450125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.450153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.450246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.450275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.450388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.450417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.450508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.450535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.450649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.450676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.450761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.450788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.450920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.450950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 
00:27:49.596 [2024-07-15 09:36:00.451080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.451109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.596 [2024-07-15 09:36:00.451196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.596 [2024-07-15 09:36:00.451225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.596 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.451369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.451397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.451551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.451580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.451741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.451784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.451953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.452001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.452107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.452137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.452313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.452356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.452476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.452503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.452594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.452621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 
00:27:49.597 [2024-07-15 09:36:00.452717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.452746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.452881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.452913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.453042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.453071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.453183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.453213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.453330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.453360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.453455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.453484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.453628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.453657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.453754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.453784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.453942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.453970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.454066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.454111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 
00:27:49.597 [2024-07-15 09:36:00.454235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.454265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.454355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.454385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.454490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.454519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.454657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.454685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.454822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.454851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.454965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.454992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.455099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.455129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.455249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.455278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.455402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.455431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.455516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.455545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 
00:27:49.597 [2024-07-15 09:36:00.455642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.455673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.455810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.455857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.455999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.456026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.456122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.456150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.456246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.456273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.456365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.456394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.456518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.456547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.456695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.456723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.456817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.456845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.456933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.456960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 
00:27:49.597 [2024-07-15 09:36:00.457051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.457078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.457235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.457265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.597 [2024-07-15 09:36:00.457428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.597 [2024-07-15 09:36:00.457474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.597 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.457602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.457633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.457783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.457818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.457904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.457931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.458026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.458052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.458168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.458196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.458325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.458355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.458477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.458506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 
00:27:49.598 [2024-07-15 09:36:00.458632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.458662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.458793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.458845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.458957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.458984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.459104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.459131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.459246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.459274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.459389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.459420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.459571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.459601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.459732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.459763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.459879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.459906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.460005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.460032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 
00:27:49.598 [2024-07-15 09:36:00.460153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.460180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.460301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.460346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.460529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.460561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.460689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.460720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.460882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.460910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.461001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.461045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.461141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.461172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.461277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.461307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.461427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.461458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.461593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.461627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 
00:27:49.598 [2024-07-15 09:36:00.461739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.461785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.461930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.461971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.462112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.462144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.462292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.462323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.462443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.462489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.462639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.462666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.462758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.462785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.462901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.462943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.463069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.463098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.463202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.463230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 
00:27:49.598 [2024-07-15 09:36:00.463346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.463374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.463470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.463499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.463594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.463622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.598 [2024-07-15 09:36:00.463735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.598 [2024-07-15 09:36:00.463762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.598 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.463888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.463916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.464004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.464032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.464136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.464168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.464293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.464324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.464433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.464461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.464600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.464630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 
00:27:49.599 [2024-07-15 09:36:00.464735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.464767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.464892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.464923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.465030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.465061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.465189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.465219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.465350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.465381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.465544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.465594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.465687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.465715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.465837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.465866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.465994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.466040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.466179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.466229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 
00:27:49.599 [2024-07-15 09:36:00.466344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.466372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.466497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.466527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.466682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.466723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.466850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.466880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.466976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.467004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.467122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.467167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.467258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.467289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.467419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.467449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.467558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.467592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 00:27:49.599 [2024-07-15 09:36:00.467682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.599 [2024-07-15 09:36:00.467713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.599 qpair failed and we were unable to recover it. 
00:27:49.599 [2024-07-15 09:36:00.467854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.599 [2024-07-15 09:36:00.467889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.599 qpair failed and we were unable to recover it.
00:27:49.599 [2024-07-15 09:36:00.467977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.599 [2024-07-15 09:36:00.468005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.599 qpair failed and we were unable to recover it.
00:27:49.599 [2024-07-15 09:36:00.468147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.599 [2024-07-15 09:36:00.468193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.599 qpair failed and we were unable to recover it.
00:27:49.599 [2024-07-15 09:36:00.468310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.599 [2024-07-15 09:36:00.468337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.599 qpair failed and we were unable to recover it.
00:27:49.599 [2024-07-15 09:36:00.468455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.599 [2024-07-15 09:36:00.468484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.599 qpair failed and we were unable to recover it.
00:27:49.599 [2024-07-15 09:36:00.468571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.599 [2024-07-15 09:36:00.468599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.599 qpair failed and we were unable to recover it.
00:27:49.599 [2024-07-15 09:36:00.468687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.468716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.468873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.468903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.469053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.469085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.469251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.469284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.469442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.469475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.469608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.469641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.469790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.469848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.469973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.470001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.470095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.470122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.470217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.470245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.470337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.470365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.470485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.470512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.470606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.470636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.470776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.470818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.470918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.470945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.471058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.471085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.471198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.471225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.471367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.471396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.471566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.471598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.471734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.471766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.471893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.471922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.472059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.472109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.472225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.472274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.472413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.472463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.472587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.472614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.472697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.472725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.472859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.472906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.473022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.473051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.473174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.473203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.473319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.473348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.473464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.473492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.473639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.473666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.473791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.473824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.473908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.473936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.474023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.474055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.474196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.474224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.474338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.474366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.474457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.474485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.474582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.474623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.474743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.474773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.600 [2024-07-15 09:36:00.474877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.600 [2024-07-15 09:36:00.474908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.600 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.475057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.475090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.475223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.475269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.475412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.475444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.475582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.475610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.475716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.475743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.475861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.475903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.476016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.476049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.476195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.476228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.476317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.476349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.476458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.476488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.476608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.476636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.476721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.476751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.476881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.476909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.477021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.477049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.477168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.477196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.477345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.477377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.477484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.477518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.477678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.477720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.477864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.477906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.478020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.478060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.478214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.478254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.478442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.478476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.478634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.478680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.478874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.478904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.479020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.479047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.479190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.479233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.479448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.479492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.479612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.479658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.479819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.479847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.479949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.479977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.480124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.480157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.480319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.480350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.480483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.480516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.480666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.480708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.480846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.480876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.480984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.481018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.481153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.481200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.481350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.481402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.481514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.481542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.481637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.481666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.601 [2024-07-15 09:36:00.481786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.601 [2024-07-15 09:36:00.481819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.601 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.481951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.481978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.482062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.482091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.482248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.482277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.482365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.482413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.482582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.482629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.482747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.482775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.482917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.482965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.483073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.483125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.483249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.483295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.483417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.483445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.483564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.483591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.483708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.483736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.483855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.483883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.483972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.484001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.484086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.484114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.484225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.484252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.484331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.484359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.484499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.484527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.484622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.484650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.484737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.484770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.484895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.484922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.485036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.485063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.485175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.485213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.485336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.485363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.485452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.485480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.485590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.485618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.485713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.485740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.485839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.485868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.485976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.486004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.486105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.486133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.486253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.486282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.486392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.486419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.486558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.486585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.486685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.486713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.486827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.486854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.486971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.486998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.487098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.487141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.487275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.487304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.487423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.487451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.487549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.487577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.602 qpair failed and we were unable to recover it.
00:27:49.602 [2024-07-15 09:36:00.487694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.602 [2024-07-15 09:36:00.487723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.487886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.487920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.488046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.488086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.488250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.488285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.488430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.488463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.488601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.488635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.488771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.488810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.488925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.488954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.489151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.489185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.489299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.489326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.489510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.489543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.489650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.489678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.489796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.489836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.489934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.489962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.490060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.490087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.490181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.490208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.490350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.490385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.490595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.490630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.490835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.490864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.490977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.491024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.491142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.491179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.491353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.491389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.491532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.491567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.491705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.491732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.491858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.491887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.492002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.492030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.492131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.492159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.492249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.492277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.492429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.492460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.492703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.492739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.492872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.492901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.493032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.493068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.493187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.493238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.603 qpair failed and we were unable to recover it.
00:27:49.603 [2024-07-15 09:36:00.493417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.603 [2024-07-15 09:36:00.493453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.604 qpair failed and we were unable to recover it.
00:27:49.604 [2024-07-15 09:36:00.493624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.604 [2024-07-15 09:36:00.493659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.604 qpair failed and we were unable to recover it.
00:27:49.604 [2024-07-15 09:36:00.493767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.604 [2024-07-15 09:36:00.493795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.604 qpair failed and we were unable to recover it.
00:27:49.604 [2024-07-15 09:36:00.493886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.604 [2024-07-15 09:36:00.493914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.604 qpair failed and we were unable to recover it.
00:27:49.604 [2024-07-15 09:36:00.494029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.604 [2024-07-15 09:36:00.494058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.604 qpair failed and we were unable to recover it.
00:27:49.604 [2024-07-15 09:36:00.494204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.604 [2024-07-15 09:36:00.494239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.604 qpair failed and we were unable to recover it.
00:27:49.604 [2024-07-15 09:36:00.494400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.604 [2024-07-15 09:36:00.494436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.604 qpair failed and we were unable to recover it.
00:27:49.604 [2024-07-15 09:36:00.494577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.604 [2024-07-15 09:36:00.494612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.604 qpair failed and we were unable to recover it.
00:27:49.604 [2024-07-15 09:36:00.494763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.604 [2024-07-15 09:36:00.494840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.604 qpair failed and we were unable to recover it.
00:27:49.604 [2024-07-15 09:36:00.494941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.494969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.495110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.495146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.495253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.495289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.495441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.495475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.495669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.495710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.495829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.495879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.495968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.495996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.496163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.496199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.496372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.496408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.496578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.496630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 
00:27:49.604 [2024-07-15 09:36:00.496817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.496847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.496945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.496973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.497146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.497190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.497335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.497384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.497475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.497504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.497622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.497651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.497787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.497856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.498016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.498055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.498264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.498303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.498453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.498489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 
00:27:49.604 [2024-07-15 09:36:00.498614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.498650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.498840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.498870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.498987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.499035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.499214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.499261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.499377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.499433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.499526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.499553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.499695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.499721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.499842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.499869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.499977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.500019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.500166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.500207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 
00:27:49.604 [2024-07-15 09:36:00.500306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.500334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.604 qpair failed and we were unable to recover it. 00:27:49.604 [2024-07-15 09:36:00.500435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.604 [2024-07-15 09:36:00.500463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.500582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.500609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.500739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.500781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.500958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.500996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.501162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.501198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.501339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.501375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.501511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.501548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.501703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.501745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.501921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.501969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 
00:27:49.605 [2024-07-15 09:36:00.502165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.502210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.502401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.502446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.502663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.502707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.502911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.502965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.503152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.503205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.503360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.503404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.503586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.503630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.503771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.503807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.503905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.503932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.504096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.504135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 
00:27:49.605 [2024-07-15 09:36:00.504249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.504276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.504405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.504443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.504616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.504653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.504775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.504811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.504928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.504955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.505057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.505092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.505215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.505252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.505352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.505390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.505627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.505669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.505825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.505854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 
00:27:49.605 [2024-07-15 09:36:00.505978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.506006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.506119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.506157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.506321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.506358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.506503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.506541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.506663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.506700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.506837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.605 [2024-07-15 09:36:00.506865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.605 qpair failed and we were unable to recover it. 00:27:49.605 [2024-07-15 09:36:00.506985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.507013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.507159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.507198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.507403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.507441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.507577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.507614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 
00:27:49.606 [2024-07-15 09:36:00.507722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.507760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.507951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.507993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.508132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.508160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.508311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.508360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.508476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.508526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.508640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.508668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.508763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.508791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.508922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.508950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.509042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.509068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.509179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.509207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 
00:27:49.606 [2024-07-15 09:36:00.509329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.509360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.509453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.509481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.509599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.509628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.509745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.509773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.509905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.509952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.510068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.510108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.510254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.510291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.510436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.510473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.510633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.510688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.510851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.510881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 
00:27:49.606 [2024-07-15 09:36:00.511022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.511075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.511219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.511267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.511353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.511381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.511480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.511510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.511644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.511686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.511839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.511881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.511991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.512030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.512213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.512254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.512417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.512456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.512586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.512615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 
00:27:49.606 [2024-07-15 09:36:00.512762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.512790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.512948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.512998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.513145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.513202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.513347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.513397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.513559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.513599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.606 [2024-07-15 09:36:00.513736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.606 [2024-07-15 09:36:00.513763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.606 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.513905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.513936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.514056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.514084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.514204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.514232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.514343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.514371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 
00:27:49.607 [2024-07-15 09:36:00.514512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.514540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.514640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.514671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.514819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.514848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.514954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.515008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.515148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.515184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.515372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.515420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.515511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.515539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.515630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.515657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.515805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.515833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.515927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.515955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 
00:27:49.607 [2024-07-15 09:36:00.516078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.516115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.516212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.516241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.516361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.516389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.516499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.516527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.516621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.516649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.516776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.516808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.516926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.516965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.517155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.517194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.517320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.517359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.517503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.517542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 
00:27:49.607 [2024-07-15 09:36:00.517697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.517736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.517890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.517919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.518029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.518074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.518232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.518272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.518429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.518468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.518651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.518691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.518835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.518863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.518979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.519007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.519166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.519207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.519360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.519400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 
00:27:49.607 [2024-07-15 09:36:00.519556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.607 [2024-07-15 09:36:00.519596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.607 qpair failed and we were unable to recover it. 00:27:49.607 [2024-07-15 09:36:00.519738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-15 09:36:00.519766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.608 qpair failed and we were unable to recover it. 00:27:49.608 [2024-07-15 09:36:00.519858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-15 09:36:00.519886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.608 qpair failed and we were unable to recover it. 00:27:49.608 [2024-07-15 09:36:00.519981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-15 09:36:00.520010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.608 qpair failed and we were unable to recover it. 00:27:49.608 [2024-07-15 09:36:00.520158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-15 09:36:00.520213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.608 qpair failed and we were unable to recover it. 00:27:49.608 [2024-07-15 09:36:00.520375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-15 09:36:00.520425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.608 qpair failed and we were unable to recover it. 00:27:49.608 [2024-07-15 09:36:00.520598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-15 09:36:00.520654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.608 qpair failed and we were unable to recover it. 00:27:49.608 [2024-07-15 09:36:00.520759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-15 09:36:00.520796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.608 qpair failed and we were unable to recover it. 00:27:49.608 [2024-07-15 09:36:00.520919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-15 09:36:00.520946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.608 qpair failed and we were unable to recover it. 00:27:49.608 [2024-07-15 09:36:00.521090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.608 [2024-07-15 09:36:00.521137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.608 qpair failed and we were unable to recover it. 
00:27:49.608 [2024-07-15 09:36:00.521282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.608 [2024-07-15 09:36:00.521331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.608 qpair failed and we were unable to recover it.
00:27:49.608 [2024-07-15 09:36:00.523725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.608 [2024-07-15 09:36:00.523767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.608 qpair failed and we were unable to recover it.
00:27:49.609 [2024-07-15 09:36:00.524884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.609 [2024-07-15 09:36:00.524942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.609 qpair failed and we were unable to recover it.
00:27:49.614 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.) repeats through 2024-07-15 09:36:00.560839, alternating among tqpair=0x7f6a60000b90, 0x1d69200, and 0x7f6a58000b90, always against addr=10.0.0.2, port=4420 ...]
00:27:49.614 [2024-07-15 09:36:00.560994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.561038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.561203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.561246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.561382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.561424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.561557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.561600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.561738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.561780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.562009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.562053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.562222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.562265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.562400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.562443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.562641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.562685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.562918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.562963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 
00:27:49.614 [2024-07-15 09:36:00.563136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.563181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.563314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.563359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.563556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.563601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.563811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.563857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.564005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.614 [2024-07-15 09:36:00.564051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.614 qpair failed and we were unable to recover it. 00:27:49.614 [2024-07-15 09:36:00.564229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.564276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.564492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.564537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.564691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.564737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.564892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.564943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.565110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.565153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 
00:27:49.615 [2024-07-15 09:36:00.565315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.565358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.565518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.565561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.565737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.565780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.566045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.566107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.566367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.566428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.566646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.566708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.566949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.567011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.567227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.567271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.567464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.567507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.567670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.567714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 
00:27:49.615 [2024-07-15 09:36:00.567917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.567964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.568123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.568170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.568384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.568431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.568640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.568687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.568840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.568887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.569079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.569125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.569294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.569341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.569550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.569597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.569745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.569790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.569936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.569983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 
00:27:49.615 [2024-07-15 09:36:00.570161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.570208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.570363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.570409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.570595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.570642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.570815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.570862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.571081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.571138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.571284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.571332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.571500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.571547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.571735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.571781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.571959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.572006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.572228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.572274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 
00:27:49.615 [2024-07-15 09:36:00.572451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.572498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.572711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.572757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.572991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.573038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.573214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.573260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.573472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.615 [2024-07-15 09:36:00.573519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.615 qpair failed and we were unable to recover it. 00:27:49.615 [2024-07-15 09:36:00.573703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.573748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.573956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.574003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.574137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.574183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.574366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.574419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.574583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.574629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 
00:27:49.616 [2024-07-15 09:36:00.574766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.574835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.575020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.575066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.575235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.575281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.575463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.575511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.575700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.575747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.575954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.576002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.576184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.576230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.576369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.576414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.576596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.576642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.576826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.576873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 
00:27:49.616 [2024-07-15 09:36:00.577031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.577078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.577298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.577344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.577523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.577568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.577740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.577787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.578006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.578052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.578246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.578293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.578510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.578556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.578745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.578790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.578984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.579033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.579225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.579274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 
00:27:49.616 [2024-07-15 09:36:00.579455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.579504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.579725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.579784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.579997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.580046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.580299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.580348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.580564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.580613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.580780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.580855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.581049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.581097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.581256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.581304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.581474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.581523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.581712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.581762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 
00:27:49.616 [2024-07-15 09:36:00.582004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.582053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.582234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.582284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.582465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.582512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.582713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.582760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.582959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.583007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.616 [2024-07-15 09:36:00.583202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.616 [2024-07-15 09:36:00.583249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.616 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.583439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.583486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.583618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.583665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.583863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.583922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.584144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.584193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 
00:27:49.617 [2024-07-15 09:36:00.584377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.584426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.584613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.584661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.584853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.584902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.585052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.585100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.585282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.585330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.585506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.585553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.585719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.585767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.585948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.585998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.586214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.586263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.586411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.586459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 
00:27:49.617 [2024-07-15 09:36:00.586615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.586663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.586810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.586879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.587063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.587116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.587280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.587333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.587535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.587588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.587783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.587850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.588062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.588111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.588252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.588300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.588482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.588529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.588706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.588753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 
00:27:49.617 [2024-07-15 09:36:00.588979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.589027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.589170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.589218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.589415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.589465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.589657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.589707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.589908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.589959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.590160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.590209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.590399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.590447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.590605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.590654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-07-15 09:36:00.590788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.617 [2024-07-15 09:36:00.590846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.618 qpair failed and we were unable to recover it. 00:27:49.618 [2024-07-15 09:36:00.591044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.618 [2024-07-15 09:36:00.591092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.618 qpair failed and we were unable to recover it. 
00:27:49.618 [2024-07-15 09:36:00.591275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.618 [2024-07-15 09:36:00.591323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.618 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triple repeats continuously from 09:36:00.591 through 09:36:00.630, first for tqpair=0x7f6a58000b90, then briefly for tqpair=0x7f6a50000b90, then for 0x7f6a58000b90 again; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:27:49.623 [2024-07-15 09:36:00.630505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.630551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.630637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.630663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.630751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.630776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.630901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.630928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.631017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.631043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.631135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.631161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.631242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.631269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.631354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.631380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.631496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.631522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.631610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.631636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 
00:27:49.623 [2024-07-15 09:36:00.631722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.631748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.631872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.631898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.631985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.632011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.632123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.632150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.632266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.632292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.632378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.632404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.632493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.632522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.632612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.632637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.632739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.632766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.632866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.632893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 
00:27:49.623 [2024-07-15 09:36:00.633032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.633057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.633215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.633254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.633373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.633401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.633509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.633535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.633620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.633645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.633726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.633757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.633874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.623 [2024-07-15 09:36:00.633901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.623 qpair failed and we were unable to recover it. 00:27:49.623 [2024-07-15 09:36:00.634010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.634036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.634129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.634155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.634264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.634290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 
00:27:49.624 [2024-07-15 09:36:00.634375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.634402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.634527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.634554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.634663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.634689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.634807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.634843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.634966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.634995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.635082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.635110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.635258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.635300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.635432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.635494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.635703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.635744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.635967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.636009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 
00:27:49.624 [2024-07-15 09:36:00.636216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.636258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.636396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.636437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.636612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.636646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.636752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.636787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.636909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.636935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.637046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.637087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.637282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.637323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.637449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.637489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.637612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.637653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.637804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.637831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 
00:27:49.624 [2024-07-15 09:36:00.637918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.637945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.638051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.638077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.638176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.638202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.638305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.638361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.638518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.638566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.638698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.638739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.638901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.638927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.639068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.639095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.639195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.639221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.639340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.639367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 
00:27:49.624 [2024-07-15 09:36:00.639492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.639532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.639649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.639691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.639872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.639898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.640011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.640037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.640130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.640156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.640280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.640328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.640458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.640505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.624 [2024-07-15 09:36:00.640640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.624 [2024-07-15 09:36:00.640683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.624 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.640817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.640866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.640982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.641008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 
00:27:49.625 [2024-07-15 09:36:00.641106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.641132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.641275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.641314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.641459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.641510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.641670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.641710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.641885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.641911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.641997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.642023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.642127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.642153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.642269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.642310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.642420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.642445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.642557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.642598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 
00:27:49.625 [2024-07-15 09:36:00.642732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.642773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.642951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.642991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.643145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.643200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.643309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.643359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.643493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.643543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.643659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.643685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.643785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.643828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.643914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.643940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.644049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.644075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.644186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.644212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 
00:27:49.625 [2024-07-15 09:36:00.644302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.644329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.644409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.644436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.644525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.644552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.644665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.644691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.644779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.644813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.644904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.644930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.645017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.645042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.645130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.645155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.645234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.645260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.645365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.645391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 
00:27:49.625 [2024-07-15 09:36:00.645495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.645521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.645609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.645637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.645756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.645786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.645880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.645907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.645995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.646020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.646152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.646198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.646360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.646402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.646488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.646514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.625 qpair failed and we were unable to recover it. 00:27:49.625 [2024-07-15 09:36:00.646622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.625 [2024-07-15 09:36:00.646647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.646731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.646757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 
00:27:49.626 [2024-07-15 09:36:00.646876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.646918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.647043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.647072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.647227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.647278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.647402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.647447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.647528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.647554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.647666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.647695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.647820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.647847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.647935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.647962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.648045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.648071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.648155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.648181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 
00:27:49.626 [2024-07-15 09:36:00.648270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.648295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.648391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.648417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.648495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.648521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.648631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.648657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.648762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.648789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.648918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.648945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.649046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.649072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.649207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.649234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.649355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.649383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 00:27:49.626 [2024-07-15 09:36:00.649476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.626 [2024-07-15 09:36:00.649503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.626 qpair failed and we were unable to recover it. 
00:27:49.626 [2024-07-15 09:36:00.649596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.626 [2024-07-15 09:36:00.649624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.626 qpair failed and we were unable to recover it.
00:27:49.626 [2024-07-15 09:36:00.649714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.626 [2024-07-15 09:36:00.649755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.626 qpair failed and we were unable to recover it.
00:27:49.626 [2024-07-15 09:36:00.649893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.626 [2024-07-15 09:36:00.649921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.626 qpair failed and we were unable to recover it.
00:27:49.626 [2024-07-15 09:36:00.650023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.626 [2024-07-15 09:36:00.650051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.626 qpair failed and we were unable to recover it.
00:27:49.626 [2024-07-15 09:36:00.650171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.626 [2024-07-15 09:36:00.650199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.626 qpair failed and we were unable to recover it.
00:27:49.626 [2024-07-15 09:36:00.650304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.626 [2024-07-15 09:36:00.650331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.626 qpair failed and we were unable to recover it.
00:27:49.626 [2024-07-15 09:36:00.650449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.626 [2024-07-15 09:36:00.650476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.626 qpair failed and we were unable to recover it.
00:27:49.626 [2024-07-15 09:36:00.650614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.626 [2024-07-15 09:36:00.650640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.626 qpair failed and we were unable to recover it.
00:27:49.626 [2024-07-15 09:36:00.650723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.626 [2024-07-15 09:36:00.650750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.626 qpair failed and we were unable to recover it.
00:27:49.626 [2024-07-15 09:36:00.650843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.626 [2024-07-15 09:36:00.650870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.626 qpair failed and we were unable to recover it.
00:27:49.626 [2024-07-15 09:36:00.650980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.626 [2024-07-15 09:36:00.651007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.626 qpair failed and we were unable to recover it.
00:27:49.626 [2024-07-15 09:36:00.651120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.626 [2024-07-15 09:36:00.651148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.626 qpair failed and we were unable to recover it.
00:27:49.626 [2024-07-15 09:36:00.651262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.626 [2024-07-15 09:36:00.651287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.626 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.651394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.651420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.651538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.651566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.651684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.651717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.651872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.651898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.652014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.652044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.652144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.652171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.652271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.652307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.652426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.652454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.652574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.652602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.652689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.652719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.652834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.652859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.652994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.653019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.653116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.653141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.653256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.653283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.653374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.653399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.653510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.653536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.653636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.653675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.653767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.653796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.653893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.653919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.654034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.654060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.654150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.654176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.654256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.654282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.654379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.654405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.654492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.654519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.654631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.654656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.654747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.654773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.654863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.654889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.654980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.655006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.655090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.655115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.655237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.655262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.655352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.655377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.655459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.655485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.655570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.655595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.655684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.655709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.655796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.655831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.655915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.655940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.656054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.656079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.656169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.656204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.656290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.656317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.656413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.656438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.627 qpair failed and we were unable to recover it.
00:27:49.627 [2024-07-15 09:36:00.656555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.627 [2024-07-15 09:36:00.656581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.656692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.656717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.656845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.656876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.656996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.657021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.657118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.657143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.657261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.657285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.657382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.657408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.657494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.657520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.657601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.657626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.657711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.657736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.657846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.657872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.657979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.658005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.658087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.658113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.658195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.658221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.658302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.658328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.658410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.658435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.658560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.658586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.658705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.658730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.658817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.658853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.658935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.658960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.659047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.659073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.659168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.659193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.659282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.659307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.659419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.659443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.659551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.659576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.659658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.659683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.659769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.659793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.659887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.659912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.659997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.660022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.660113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.660139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.660243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.660268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.660358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.660384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.660470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.660496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.660610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.660636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.660720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.660746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.660844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.660871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.660957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.660982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.661104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.661131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.661210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.661236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.661320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.661346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.661444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.628 [2024-07-15 09:36:00.661471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.628 qpair failed and we were unable to recover it.
00:27:49.628 [2024-07-15 09:36:00.661568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.661594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.661703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.661733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.661841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.661867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.661961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.661987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.662072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.662097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.662188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.662214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.662323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.662349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.662431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.662456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.662564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.662589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.662673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.662698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.662788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.662825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.662938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.662963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.663042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.663067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.663155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.663180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.663259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.663284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.663378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.663403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.663511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.663536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.663615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.663640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.663722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.663748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.663862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.663888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.663978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.664004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.664090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.664116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.664207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.664234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.664315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.664340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.664420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.664445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.664529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.664553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.664630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.664655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.664736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.664763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.664869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.664908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.664992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.665019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.665112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.665139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.665233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.665257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.665395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.665421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.665504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.665530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.665620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.665646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.665740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.665765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.665880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.665907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.665991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.666016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.666177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.666203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.666335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.666360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.666451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.629 [2024-07-15 09:36:00.666479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.629 qpair failed and we were unable to recover it.
00:27:49.629 [2024-07-15 09:36:00.666591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.666621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.666766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.666792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.666923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.666949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.667035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.667060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.667141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.667166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.667253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.667280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.667361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.667386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.667471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.667497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.667597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.667622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.667713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.667738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.667869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.667894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.667977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.668003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.668095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.668119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.668202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.668228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.668318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.668343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.668511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.668537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.668644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.668669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.668754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.668779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.668896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.668935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.669040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.669066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.669157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.669186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.669275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.669301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.669415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.669463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.669574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.669599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.669713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.669738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.669869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.669895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.670008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.670033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.670152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.670180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.670302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.670327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.670407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.670432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.670525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.670550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.670673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.670699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.670815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.670850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.670933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.630 [2024-07-15 09:36:00.670959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.630 qpair failed and we were unable to recover it.
00:27:49.630 [2024-07-15 09:36:00.671041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.671066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.671194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.671231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.671434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.671488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.671574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.671602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.671730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.671755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.671873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.671924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.672062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.672105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.672294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.672348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.672458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.672483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.672584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.672610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.672696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.672721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.672866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.672893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.672986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.673011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.673094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.673118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.673227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.673252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.673340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.673365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.673497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.673523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.673659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.673685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.673770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.673795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.673895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.673921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.674005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.674030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.674133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.674158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.674269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.674294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.674408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.674434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.674520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.674545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.674634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.674660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.674747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.674772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.674868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.674894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.674977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.675002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.675115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.675141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.675225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.675251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.675333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.675358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.675503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.675542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.675644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.675671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.675752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.675777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.675896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.675921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.676006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.676031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.676110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.676135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.676251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.676286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.676391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.676425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.631 [2024-07-15 09:36:00.676550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.631 [2024-07-15 09:36:00.676575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.631 qpair failed and we were unable to recover it.
00:27:49.632 [2024-07-15 09:36:00.676689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.676716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.676797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.676829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.676935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.676970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.677136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.677161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.677275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.677301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.677399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.677443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.677546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.677573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.677655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.677684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.677772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.677797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.677946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.677973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 
00:27:49.632 [2024-07-15 09:36:00.678061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.678088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.678172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.678198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.678284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.678309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.678423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.678449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.678528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.678553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.678647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.678673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.678762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.678787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.678888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.678914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.678990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.679016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.679113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.679139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 
00:27:49.632 [2024-07-15 09:36:00.679275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.679301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.679408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.679433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.679517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.679543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.679641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.679666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.679778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.679809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.679909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.679935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.680017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.680042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.680161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.680186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.680375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.680401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.680509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.680547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 
00:27:49.632 [2024-07-15 09:36:00.680629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.680654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.680746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.680772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.680878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.680906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.680993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.681018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.681129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.681155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.681263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.681289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.681369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.681420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.681538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.681572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.681678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.681714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 00:27:49.632 [2024-07-15 09:36:00.681852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.632 [2024-07-15 09:36:00.681880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.632 qpair failed and we were unable to recover it. 
00:27:49.632 [2024-07-15 09:36:00.681961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.681986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.682071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.682096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.682177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.682202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.682306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.682331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.682443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.682469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.682553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.682583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.682663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.682689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.682777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.682826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.682948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.682973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.683044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.683069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 
00:27:49.633 [2024-07-15 09:36:00.683182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.683207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.683289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.683314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.683398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.683423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.683530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.683554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.683675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.683700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.683798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.683827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.683943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.683968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.684042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.684066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.684157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.684183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.684275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.684300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 
00:27:49.633 [2024-07-15 09:36:00.684382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.684408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.684518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.684543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.684637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.684666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.684752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.684778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.684882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.684909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.685025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.685050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.685157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.685194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.685339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.685374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.685488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.685524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.685655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.685695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 
00:27:49.633 [2024-07-15 09:36:00.685784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.685820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.685949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.686000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.686115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.686153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.686308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.686350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.686494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.686541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.686682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.686708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.686798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.686832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.686914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.686940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.687078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.687115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.687275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.687314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 
00:27:49.633 [2024-07-15 09:36:00.687426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.633 [2024-07-15 09:36:00.687462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.633 qpair failed and we were unable to recover it. 00:27:49.633 [2024-07-15 09:36:00.687584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.687621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.687734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.687769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.687897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.687923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.688067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.688105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.688218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.688260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.688410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.688446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.688597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.688643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.688760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.688788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.688897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.688924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 
00:27:49.634 [2024-07-15 09:36:00.689012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.689038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.689170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.689204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.689324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.689367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.689519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.689568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.689694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.689730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.689869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.689896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.690012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.690038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.690123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.690149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.690266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.690304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.690437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.690464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 
00:27:49.634 [2024-07-15 09:36:00.690593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.690632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.690740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.690785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.690907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.690932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.691057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.691096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.691254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.691288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.691404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.691439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.691552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.691589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.691732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.691757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.691891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.691917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.692032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.692059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 
00:27:49.634 [2024-07-15 09:36:00.692212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.692246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.692436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.692462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.692629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.692665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.692783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.692828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.692959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.692985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.693090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.634 [2024-07-15 09:36:00.693124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.634 qpair failed and we were unable to recover it. 00:27:49.634 [2024-07-15 09:36:00.693259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.693294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 00:27:49.635 [2024-07-15 09:36:00.693409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.693445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 00:27:49.635 [2024-07-15 09:36:00.693601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.693637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 00:27:49.635 [2024-07-15 09:36:00.693754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.693790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 
00:27:49.635 [2024-07-15 09:36:00.693913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.693939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 00:27:49.635 [2024-07-15 09:36:00.694031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.694082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 00:27:49.635 [2024-07-15 09:36:00.694205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.694243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 00:27:49.635 [2024-07-15 09:36:00.694418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.694456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 00:27:49.635 [2024-07-15 09:36:00.694660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.694700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 00:27:49.635 [2024-07-15 09:36:00.694858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.694884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 00:27:49.635 [2024-07-15 09:36:00.695002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.695028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 00:27:49.635 [2024-07-15 09:36:00.695141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.695179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 00:27:49.635 [2024-07-15 09:36:00.695308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.695355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 00:27:49.635 [2024-07-15 09:36:00.695502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.635 [2024-07-15 09:36:00.695543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.635 qpair failed and we were unable to recover it. 
00:27:49.635 [2024-07-15 09:36:00.695667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.635 [2024-07-15 09:36:00.695703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.635 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record triple repeats for tqpair=0x7f6a50000b90 from 09:36:00.695868 through 09:36:00.703397 ...]
00:27:49.636 [2024-07-15 09:36:00.703560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.636 [2024-07-15 09:36:00.703620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.636 qpair failed and we were unable to recover it.
[... the same record triple repeats for tqpair=0x7f6a58000b90 from 09:36:00.703766 through 09:36:00.720297 ...]
00:27:49.641 [2024-07-15 09:36:00.720378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.641 [2024-07-15 09:36:00.720404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.641 qpair failed and we were unable to recover it.
00:27:49.641 [2024-07-15 09:36:00.720444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d770e0 (9): Bad file descriptor
00:27:49.641 [2024-07-15 09:36:00.720598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.641 [2024-07-15 09:36:00.720636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.641 qpair failed and we were unable to recover it.
00:27:49.641 [2024-07-15 09:36:00.720761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.641 [2024-07-15 09:36:00.720787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.641 qpair failed and we were unable to recover it.
00:27:49.641 [2024-07-15 09:36:00.720888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.641 [2024-07-15 09:36:00.720913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.641 qpair failed and we were unable to recover it.
00:27:49.641 [2024-07-15 09:36:00.721022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.641 [2024-07-15 09:36:00.721047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.641 qpair failed and we were unable to recover it.
00:27:49.641 [2024-07-15 09:36:00.721143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.641 [2024-07-15 09:36:00.721169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.641 qpair failed and we were unable to recover it.
00:27:49.641 [2024-07-15 09:36:00.721251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.641 [2024-07-15 09:36:00.721276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.641 qpair failed and we were unable to recover it.
00:27:49.641 [2024-07-15 09:36:00.721412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.641 [2024-07-15 09:36:00.721437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.641 qpair failed and we were unable to recover it.
00:27:49.641 [2024-07-15 09:36:00.721545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.641 [2024-07-15 09:36:00.721570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.641 qpair failed and we were unable to recover it.
00:27:49.641 [2024-07-15 09:36:00.721685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.721710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.721793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.721825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.721910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.721935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.722022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.722047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.722200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.722232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.722392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.722427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.722565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.722601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.722734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.722759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.722883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.722910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.722989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.723014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 
00:27:49.641 [2024-07-15 09:36:00.723098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.723124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.723229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.723263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.723379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.723414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.723559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.723583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.723679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.723720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.723826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.723854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.723947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.723975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.724058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.724083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.724216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.724242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 00:27:49.641 [2024-07-15 09:36:00.724372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.641 [2024-07-15 09:36:00.724426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.641 qpair failed and we were unable to recover it. 
00:27:49.646 [2024-07-15 09:36:00.747888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.747915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.747994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.748044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.748150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.748183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.748290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.748326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.748440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.748473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.748589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.748639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.748752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.748779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.748993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.749022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.749170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.749222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.749312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.749338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 
00:27:49.646 [2024-07-15 09:36:00.749458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.749505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.749588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.749614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.749715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.749741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.749840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.749867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.749950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.749976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.750060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.750086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.750182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.750215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.750328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.750354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.750465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.750491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.750580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.750608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 
00:27:49.646 [2024-07-15 09:36:00.750691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.750718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.750831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.750858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.750946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.750972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.751082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.751111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.751197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.751223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.751359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.751393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.751496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.751529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.751647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.751672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.751755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.751781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.751871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.751898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 
00:27:49.646 [2024-07-15 09:36:00.751987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.752013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.752152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.752186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.752359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.752392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.752505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.646 [2024-07-15 09:36:00.752538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.646 qpair failed and we were unable to recover it. 00:27:49.646 [2024-07-15 09:36:00.752639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.752674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.752782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.752830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.752915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.752941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.753039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.753072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.753176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.753211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.753322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.753356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 
00:27:49.647 [2024-07-15 09:36:00.753468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.753495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.753612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.753638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.753790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.753844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.753969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.753997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.754105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.754157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.754262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.754297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.754428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.754481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.754621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.754669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.754758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.754785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.754875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.754902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 
00:27:49.647 [2024-07-15 09:36:00.755041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.755067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.755172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.755206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.755310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.755345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.755482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.755516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.755633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.755667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.755776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.755817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.755924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.755954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.756045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.756071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.756236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.756269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.756372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.756406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 
00:27:49.647 [2024-07-15 09:36:00.756512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.756545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.756703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.756729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.756817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.756843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.756926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.756951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.757045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.757079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.757217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.757250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.757378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.757411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.757545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.757578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.757674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.757707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.757826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.757852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 
00:27:49.647 [2024-07-15 09:36:00.757941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.757967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.647 [2024-07-15 09:36:00.758059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.647 [2024-07-15 09:36:00.758084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.647 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.758198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.758246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.758359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.758384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.758507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.758541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.758679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.758705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.758838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.758870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.758984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.759009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.759116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.759149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.759269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.759312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 
00:27:49.648 [2024-07-15 09:36:00.759421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.759454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.759570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.759599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.759696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.759723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.759813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.759841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.759947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.759973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.760053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.760079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.760188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.760213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.760316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.760344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.760428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.760454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.760536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.760562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 
00:27:49.648 [2024-07-15 09:36:00.760646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.760671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.760758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.760785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.760879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.760905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.760987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.761012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.761163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.761189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.761297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.761324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.761457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.761496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.761633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.761667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.761782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.761831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.761966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.761992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 
00:27:49.648 [2024-07-15 09:36:00.762152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.762187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.762332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.762366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.762510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.762544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.762675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.762724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.762832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.762862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.762971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.762997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.763081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.763108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.763189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.763216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.763329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.763355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.648 [2024-07-15 09:36:00.763441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.763469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 
00:27:49.648 [2024-07-15 09:36:00.763569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.648 [2024-07-15 09:36:00.763596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.648 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.763716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.763742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.763828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.763867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.763980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.764006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.764083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.764109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.764203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.764230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.764309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.764336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.764454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.764480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.764561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.764588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.764677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.764703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 
00:27:49.649 [2024-07-15 09:36:00.764785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.764817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.764900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.764926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.765056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.765083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.765221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.765249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.765373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.765400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.765488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.765513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.765618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.765644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.765754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.765780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.765876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.765901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 00:27:49.649 [2024-07-15 09:36:00.765996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.649 [2024-07-15 09:36:00.766046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.649 qpair failed and we were unable to recover it. 
00:27:49.649 [2024-07-15 09:36:00.766158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.649 [2024-07-15 09:36:00.766193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.649 qpair failed and we were unable to recover it.
00:27:49.650 [2024-07-15 09:36:00.769554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.650 [2024-07-15 09:36:00.769605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.650 qpair failed and we were unable to recover it.
00:27:49.942 [2024-07-15 09:36:00.770708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.942 [2024-07-15 09:36:00.770756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.942 qpair failed and we were unable to recover it.
00:27:49.943 [2024-07-15 09:36:00.776809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.943 [2024-07-15 09:36:00.776853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.943 qpair failed and we were unable to recover it.
[... further near-identical records omitted: the same three-line pattern (posix_sock_create connect() failed with errno = 111, followed by an nvme_tcp_qpair_connect_sock error against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously for tqpairs 0x7f6a60000b90, 0x7f6a58000b90, 0x7f6a50000b90, and 0x1d69200 through timestamp 2024-07-15 09:36:00.798 ...]
00:27:49.947 [2024-07-15 09:36:00.798549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.947 [2024-07-15 09:36:00.798587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.947 qpair failed and we were unable to recover it. 00:27:49.947 [2024-07-15 09:36:00.798708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.947 [2024-07-15 09:36:00.798733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.947 qpair failed and we were unable to recover it. 00:27:49.947 [2024-07-15 09:36:00.798822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.947 [2024-07-15 09:36:00.798848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.947 qpair failed and we were unable to recover it. 00:27:49.947 [2024-07-15 09:36:00.798959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.947 [2024-07-15 09:36:00.798985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.947 qpair failed and we were unable to recover it. 00:27:49.947 [2024-07-15 09:36:00.799097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.947 [2024-07-15 09:36:00.799124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.947 qpair failed and we were unable to recover it. 00:27:49.947 [2024-07-15 09:36:00.799238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.947 [2024-07-15 09:36:00.799263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.947 qpair failed and we were unable to recover it. 00:27:49.947 [2024-07-15 09:36:00.799430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.947 [2024-07-15 09:36:00.799458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.947 qpair failed and we were unable to recover it. 00:27:49.947 [2024-07-15 09:36:00.799613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.947 [2024-07-15 09:36:00.799648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.947 qpair failed and we were unable to recover it. 00:27:49.947 [2024-07-15 09:36:00.799795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.947 [2024-07-15 09:36:00.799843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.947 qpair failed and we were unable to recover it. 00:27:49.947 [2024-07-15 09:36:00.799940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.947 [2024-07-15 09:36:00.799967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.947 qpair failed and we were unable to recover it. 
00:27:49.947 [2024-07-15 09:36:00.800078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.947 [2024-07-15 09:36:00.800103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.947 qpair failed and we were unable to recover it. 00:27:49.947 [2024-07-15 09:36:00.800193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.947 [2024-07-15 09:36:00.800242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.947 qpair failed and we were unable to recover it. 00:27:49.947 [2024-07-15 09:36:00.800425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.947 [2024-07-15 09:36:00.800460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.947 qpair failed and we were unable to recover it. 00:27:49.947 [2024-07-15 09:36:00.800651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.800687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.800793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.800871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.800967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.800992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.801129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.801164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.801339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.801374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.801513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.801549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.801660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.801686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 
00:27:49.948 [2024-07-15 09:36:00.801787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.801834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.801941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.801967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.802084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.802118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.802197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.802248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.802433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.802489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.802602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.802628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.802741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.802768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.802861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.802888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.802996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.803034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.803186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.803239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 
00:27:49.948 [2024-07-15 09:36:00.803407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.803456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.803594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.803619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.803722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.803749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.803838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.803868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.803956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.803982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.804151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.804191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.804354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.804388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.804500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.804542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.804688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.804714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.804827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.804863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 
00:27:49.948 [2024-07-15 09:36:00.804946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.804972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.805132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.805205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.805343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.805393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.805523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.805568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.805653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.805679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.805827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.805864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.805965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.806003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.806156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.806202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.806339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.806388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.806465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.806491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 
00:27:49.948 [2024-07-15 09:36:00.806572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.806599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.948 qpair failed and we were unable to recover it. 00:27:49.948 [2024-07-15 09:36:00.806713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.948 [2024-07-15 09:36:00.806739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.806834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.806881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.806973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.807000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.807115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.807142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.807233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.807258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.807393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.807430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.807608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.807644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.807787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.807819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.807932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.807958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 
00:27:49.949 [2024-07-15 09:36:00.808132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.808178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.808354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.808390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.808546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.808582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.808735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.808764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.808863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.808895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.808988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.809014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.809119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.809157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.809269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.809295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.809433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.809459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.809540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.809567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 
00:27:49.949 [2024-07-15 09:36:00.809650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.809676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.809765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.809814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.809941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.809968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.810058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.810084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.810221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.810258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.810428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.810478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.810566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.810592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.810681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.810709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.810833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.810870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.810981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.811007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 
00:27:49.949 [2024-07-15 09:36:00.811115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.811140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.811217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.811242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.811365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.811400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.811577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.811632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.811718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.811744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.811845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.811885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.812004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.812031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.812169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.812205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.812398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.812447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 00:27:49.949 [2024-07-15 09:36:00.812588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.812627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.949 qpair failed and we were unable to recover it. 
00:27:49.949 [2024-07-15 09:36:00.812744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.949 [2024-07-15 09:36:00.812782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.812963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.812991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.813128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.813185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.813294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.813331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.813490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.813516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.813626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.813652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.813763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.813789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.813879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.813905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.814017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.814044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.814159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.814185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 
00:27:49.950 [2024-07-15 09:36:00.814264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.814290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.814400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.814426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.814506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.814532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.814626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.814666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.814784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.814822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.814963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.815002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.815120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.815147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.815238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.815275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.815480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.815517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.815660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.815686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 
00:27:49.950 [2024-07-15 09:36:00.815793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.815829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.815931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.815957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.816071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.816097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.816236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.816273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.816476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.816513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.816659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.816687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.816769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.816795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.816893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.816921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.817022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.817048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.817138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.817165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 
00:27:49.950 [2024-07-15 09:36:00.817250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.817277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.817385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.817412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.817506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.817544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.817647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.817686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.817811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.817838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.817960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.950 [2024-07-15 09:36:00.817985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.950 qpair failed and we were unable to recover it. 00:27:49.950 [2024-07-15 09:36:00.818067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.951 [2024-07-15 09:36:00.818104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.951 qpair failed and we were unable to recover it. 00:27:49.951 [2024-07-15 09:36:00.818244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.951 [2024-07-15 09:36:00.818290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.951 qpair failed and we were unable to recover it. 00:27:49.951 [2024-07-15 09:36:00.818442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.951 [2024-07-15 09:36:00.818480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.951 qpair failed and we were unable to recover it. 00:27:49.951 [2024-07-15 09:36:00.818628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.951 [2024-07-15 09:36:00.818666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.951 qpair failed and we were unable to recover it. 
00:27:49.951 [2024-07-15 09:36:00.818788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.951 [2024-07-15 09:36:00.818820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.951 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it.") repeats for roughly 200 further connection attempts between 09:36:00.818 and 09:36:00.850, cycling over tqpair handles 0x7f6a60000b90, 0x7f6a58000b90, 0x7f6a50000b90, and 0x1d69200, all targeting addr=10.0.0.2, port=4420 ...]
00:27:49.956 [2024-07-15 09:36:00.850779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.956 [2024-07-15 09:36:00.850811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.956 qpair failed and we were unable to recover it.
00:27:49.956 [2024-07-15 09:36:00.850894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.956 [2024-07-15 09:36:00.850920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.956 qpair failed and we were unable to recover it. 00:27:49.956 [2024-07-15 09:36:00.850998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.956 [2024-07-15 09:36:00.851023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.956 qpair failed and we were unable to recover it. 00:27:49.956 [2024-07-15 09:36:00.851133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.956 [2024-07-15 09:36:00.851159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.956 qpair failed and we were unable to recover it. 00:27:49.956 [2024-07-15 09:36:00.851270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.956 [2024-07-15 09:36:00.851296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.956 qpair failed and we were unable to recover it. 00:27:49.956 [2024-07-15 09:36:00.851384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.956 [2024-07-15 09:36:00.851410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.956 qpair failed and we were unable to recover it. 00:27:49.956 [2024-07-15 09:36:00.851491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.956 [2024-07-15 09:36:00.851523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.956 qpair failed and we were unable to recover it. 00:27:49.956 [2024-07-15 09:36:00.851639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.956 [2024-07-15 09:36:00.851666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.956 qpair failed and we were unable to recover it. 00:27:49.956 [2024-07-15 09:36:00.851779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.956 [2024-07-15 09:36:00.851813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.956 qpair failed and we were unable to recover it. 00:27:49.956 [2024-07-15 09:36:00.851943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.956 [2024-07-15 09:36:00.851991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.956 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.852097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.852148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 
00:27:49.957 [2024-07-15 09:36:00.852235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.852261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.852400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.852426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.852501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.852526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.852634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.852660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.852746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.852772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.852921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.852960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.853065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.853104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.853248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.853276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.853367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.853393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.853486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.853514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 
00:27:49.957 [2024-07-15 09:36:00.853602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.853628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.853717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.853744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.853835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.853864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.853969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.853995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.854078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.854104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.854198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.854224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.854312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.854337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.854423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.854449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.854533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.854560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.854698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.854724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 
00:27:49.957 [2024-07-15 09:36:00.854812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.854840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.854934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.854959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.855050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.855116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.855245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.855288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.855458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.855483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.855618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.855644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.855753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.855779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.855880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.855908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.855998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.856023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.856152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.856179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 
00:27:49.957 [2024-07-15 09:36:00.856299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.856336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.856451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.856487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.856635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.856672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.856846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.856872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.856987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.857013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.857100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.857126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.857270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.857296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.857439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.857485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.957 [2024-07-15 09:36:00.857614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.957 [2024-07-15 09:36:00.857651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.957 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.857774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.857819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 
00:27:49.958 [2024-07-15 09:36:00.857927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.857953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.858074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.858110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.858249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.858286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.858394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.858431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.858606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.858642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.858769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.858812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.858921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.858947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.859025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.859075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.859193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.859231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.859417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.859484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 
00:27:49.958 [2024-07-15 09:36:00.859589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.859628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.859763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.859807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.859902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.859929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.860017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.860066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.860215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.860252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.860376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.860414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.860536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.860565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.860680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.860709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.860824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.860851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.860963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.860990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 
00:27:49.958 [2024-07-15 09:36:00.861099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.861152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.861232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.861258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.861367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.861399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.861537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.861563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.861646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.861671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.861775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.861806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.861897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.861923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.862029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.862055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.862171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.862197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.862360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.862406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 
00:27:49.958 [2024-07-15 09:36:00.862523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.862560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.862684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.862709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.862816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.862841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.862922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.862947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.863032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.863058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.863138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.863163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.863312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.863367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.863544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.863583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.863699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.958 [2024-07-15 09:36:00.863725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.958 qpair failed and we were unable to recover it. 00:27:49.958 [2024-07-15 09:36:00.863809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.863835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 
00:27:49.959 [2024-07-15 09:36:00.863924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.863949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.864026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.864051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.864146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.864174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.864252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.864296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.864412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.864438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.864590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.864627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.864787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.864842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.864941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.864968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.865088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.865114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.865227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.865258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 
00:27:49.959 [2024-07-15 09:36:00.865369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.865422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.865513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.865541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.865625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.865651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.865746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.865771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.865873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.865899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.865973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.865998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.866136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.866162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.866262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.866299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.866444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.866480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.866593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.866628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 
00:27:49.959 [2024-07-15 09:36:00.866735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.866760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.866878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.866905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.866986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.867012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.867138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.867164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.867243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.867293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.867427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.867464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.867579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.867616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.867767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.867810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.867925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.867950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.868059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.868084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 
00:27:49.959 [2024-07-15 09:36:00.868162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.868188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.868266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.868292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.868375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.868403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.868537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.868592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.868715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.868742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.868837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.868863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.868972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.869003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.869116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.869152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.869273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.959 [2024-07-15 09:36:00.869309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.959 qpair failed and we were unable to recover it. 00:27:49.959 [2024-07-15 09:36:00.869417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.960 [2024-07-15 09:36:00.869454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.960 qpair failed and we were unable to recover it. 
00:27:49.960 [2024-07-15 09:36:00.869584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.960 [2024-07-15 09:36:00.869623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.960 qpair failed and we were unable to recover it.
00:27:49.960 [2024-07-15 09:36:00.869739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.960 [2024-07-15 09:36:00.869765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.960 qpair failed and we were unable to recover it.
[... roughly 200 further identical three-line failures elided: the same connect()/qpair error sequence repeats from 09:36:00.869880 through 09:36:00.902093, with tqpair alternating among 0x1d69200, 0x7f6a50000b90, 0x7f6a58000b90, and 0x7f6a60000b90, all against addr=10.0.0.2, port=4420 ...]
00:27:49.965 [2024-07-15 09:36:00.902202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.965 [2024-07-15 09:36:00.902228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:49.965 qpair failed and we were unable to recover it.
00:27:49.965 [2024-07-15 09:36:00.902337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.965 [2024-07-15 09:36:00.902374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.965 qpair failed and we were unable to recover it. 00:27:49.965 [2024-07-15 09:36:00.902544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.965 [2024-07-15 09:36:00.902597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.965 qpair failed and we were unable to recover it. 00:27:49.965 [2024-07-15 09:36:00.902699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.965 [2024-07-15 09:36:00.902726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.965 qpair failed and we were unable to recover it. 00:27:49.965 [2024-07-15 09:36:00.902814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.965 [2024-07-15 09:36:00.902841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.965 qpair failed and we were unable to recover it. 00:27:49.965 [2024-07-15 09:36:00.902917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.965 [2024-07-15 09:36:00.902943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.965 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.903029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.903056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.903163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.903189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.903313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.903340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.903426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.903454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.903573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.903599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 
00:27:49.966 [2024-07-15 09:36:00.903684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.903709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.903790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.903822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.903940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.903966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.904048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.904075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.904158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.904189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.904298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.904350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.904462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.904488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.904588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.904627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.904775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.904816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.904909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.904936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 
00:27:49.966 [2024-07-15 09:36:00.905019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.905045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.905126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.905152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.905262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.905301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.905426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.905480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.905586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.905613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.905691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.905718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.905830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.905857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.905969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.905995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.906143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.906170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.906254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.906281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 
00:27:49.966 [2024-07-15 09:36:00.906361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.906387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.906495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.906522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.906635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.906661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.906788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.906834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.906986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.907015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.907136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.907164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.907251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.907277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.907391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.907417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.907520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.907558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.907714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.907741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 
00:27:49.966 [2024-07-15 09:36:00.907876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.907902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.907991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.908018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.966 [2024-07-15 09:36:00.908124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.966 [2024-07-15 09:36:00.908161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.966 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.908285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.908335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.908447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.908473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.908578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.908605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.908732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.908770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.908896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.908925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.909042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.909070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.909166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.909202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 
00:27:49.967 [2024-07-15 09:36:00.909350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.909387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.909505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.909542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.909662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.909689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.909768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.909794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.909886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.909919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.910071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.910109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.910252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.910289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.910400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.910436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.910601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.910627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.910733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.910759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 
00:27:49.967 [2024-07-15 09:36:00.910875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.910901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.910987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.911015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.911159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.911198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.911395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.911453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.911694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.911719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.911855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.911881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.911970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.911996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.912084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.912110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.912223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.912250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.912356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.912393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 
00:27:49.967 [2024-07-15 09:36:00.912504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.912541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.912685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.912711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.912813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.912853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.912950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.912977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.913094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.913121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.913203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.913229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.913312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.913339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.967 [2024-07-15 09:36:00.913423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.967 [2024-07-15 09:36:00.913450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.967 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.913557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.913584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.913665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.913691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 
00:27:49.968 [2024-07-15 09:36:00.913828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.913855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.913945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.913972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.914053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.914080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.914194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.914220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.914296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.914322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.914425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.914451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.914537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.914566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.914654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.914681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.914788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.914821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.914932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.914958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 
00:27:49.968 [2024-07-15 09:36:00.915066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.915092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.915173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.915200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.915338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.915364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.915445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.915471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.915557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.915588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.915727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.915754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.915866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.915893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.915978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.916028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.916192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.916218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.916352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.916378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 
00:27:49.968 [2024-07-15 09:36:00.916461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.916487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.916595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.916622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.916706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.916733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.916874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.916901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.916979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.917007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.917138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.917188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.917321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.917367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.917501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.917527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.917610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.917636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.917722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.917748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 
00:27:49.968 [2024-07-15 09:36:00.917838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.917867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.917974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.917999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.918099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.918137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.918276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.918315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.918460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.918513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.918637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.918663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.918778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.918809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.968 [2024-07-15 09:36:00.918901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.968 [2024-07-15 09:36:00.918927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.968 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.919033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.919058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.919162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.919198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 
00:27:49.969 [2024-07-15 09:36:00.919320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.919347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.919471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.919499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.919584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.919610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.919727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.919753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.919839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.919865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.919949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.919975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.920084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.920110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.920191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.920217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.920327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.920353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.920490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.920516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 
00:27:49.969 [2024-07-15 09:36:00.920622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.920648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.920774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.920819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.920966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.921004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.921133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.921159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.921310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.921346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.921573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.921632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.921720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.921748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.921862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.921890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.922000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.922027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 00:27:49.969 [2024-07-15 09:36:00.922155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.969 [2024-07-15 09:36:00.922207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.969 qpair failed and we were unable to recover it. 
00:27:49.975 [2024-07-15 09:36:00.953708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.953751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.953841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.953867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.953952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.953978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.954086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.954112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.954242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.954280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.954424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.954461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.954622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.954661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.954856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.954886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.954966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.954992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.955107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.955133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 
00:27:49.975 [2024-07-15 09:36:00.955252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.955291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.955465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.955504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.955653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.955692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.955881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.955908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.956032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.956058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.956153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.956179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.956332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.956372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.956534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.956573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.956734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.956772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.956913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.956939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 
00:27:49.975 [2024-07-15 09:36:00.957046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.957072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.957165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.957190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.957322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.957361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.957473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.957499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.957618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.957660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.957797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.975 [2024-07-15 09:36:00.957828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.975 qpair failed and we were unable to recover it. 00:27:49.975 [2024-07-15 09:36:00.957915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.957940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.958072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.958097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.958243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.958275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.958469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.958504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 
00:27:49.976 [2024-07-15 09:36:00.958631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.958657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.958775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.958811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.958908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.958933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.959015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.959040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.959126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.959152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.959363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.959396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.959510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.959544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.959688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.959723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.959863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.959889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.959977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.960002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 
00:27:49.976 [2024-07-15 09:36:00.960177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.960210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.960349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.960380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.960522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.960562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.960684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.960722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.960860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.960887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.960965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.960991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.961077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.961124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.961317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.961369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.961525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.961563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.961697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.961735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 
00:27:49.976 [2024-07-15 09:36:00.961881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.961908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.962018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.962043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.962160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.962214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.962341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.962394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.962553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.962603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.962728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.962767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.962926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.962953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.963046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.963073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.963195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.963230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.963427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.963479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 
00:27:49.976 [2024-07-15 09:36:00.963600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.963638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.963792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.963852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.963984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.976 [2024-07-15 09:36:00.964029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.976 qpair failed and we were unable to recover it. 00:27:49.976 [2024-07-15 09:36:00.964189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.964242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.964385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.964434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.964566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.964621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.964709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.964736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.964855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.964883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.964969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.964995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.965076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.965102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 
00:27:49.977 [2024-07-15 09:36:00.965214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.965240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.965322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.965349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.965458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.965484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.965567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.965593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.965709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.965735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.965849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.965877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.965965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.965991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.966181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.966214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.966354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.966386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.966581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.966620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 
00:27:49.977 [2024-07-15 09:36:00.966769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.966794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.966924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.966951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.967036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.967062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.967180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.967230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.967329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.967363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.967552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.967591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.967715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.967740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.967854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.967881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.967967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.967992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.968097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.968151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 
00:27:49.977 [2024-07-15 09:36:00.968327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.968359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.968489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.968546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.968704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.968732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.968833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.968873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.968963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.969020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.969171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.969211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.969360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.969386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.969465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.969491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.969625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.969650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 00:27:49.977 [2024-07-15 09:36:00.969780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.977 [2024-07-15 09:36:00.969852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.977 qpair failed and we were unable to recover it. 
00:27:49.977 [2024-07-15 09:36:00.969937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.969963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.970079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.970105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.970246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.970300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.970448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.970500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.970611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.970638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.970747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.970774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.970897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.970923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.971007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.971034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.971113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.971139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.971246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.971271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 
00:27:49.978 [2024-07-15 09:36:00.971378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.971404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.971484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.971510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.971589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.971616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.971721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.971747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.971862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.971889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.971987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.972013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.972109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.972137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.972275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.972301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.972411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.972437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.972520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.972546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 
00:27:49.978 [2024-07-15 09:36:00.972654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.972680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.972792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.972825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.972907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.972932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.973016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.973042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.973117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.973144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.973229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.973255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.973340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.973366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.973478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.973503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.973616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.973642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.973754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.973783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 
00:27:49.978 [2024-07-15 09:36:00.973881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.973907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.974014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.974046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.974184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.974222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.974344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.974382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.974528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.974555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.978 [2024-07-15 09:36:00.974666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.978 [2024-07-15 09:36:00.974692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.978 qpair failed and we were unable to recover it. 00:27:49.979 [2024-07-15 09:36:00.974813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.974857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 00:27:49.979 [2024-07-15 09:36:00.974988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.975021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 00:27:49.979 [2024-07-15 09:36:00.975173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.975212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 00:27:49.979 [2024-07-15 09:36:00.975392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.975431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 
00:27:49.979 [2024-07-15 09:36:00.975554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.975593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 00:27:49.979 [2024-07-15 09:36:00.975742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.975770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 00:27:49.979 [2024-07-15 09:36:00.975914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.975967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 00:27:49.979 [2024-07-15 09:36:00.976045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.976071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 00:27:49.979 [2024-07-15 09:36:00.976206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.976239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 00:27:49.979 [2024-07-15 09:36:00.976403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.976456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 00:27:49.979 [2024-07-15 09:36:00.976543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.976569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 00:27:49.979 [2024-07-15 09:36:00.976684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.976710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 00:27:49.979 [2024-07-15 09:36:00.976795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.976829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 00:27:49.979 [2024-07-15 09:36:00.976952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.979 [2024-07-15 09:36:00.976978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:49.979 qpair failed and we were unable to recover it. 
00:27:49.979 [2024-07-15 09:36:00.977106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.977153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.977236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.977263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.977374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.977400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.977498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.977525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.977602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.977628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.977740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.977766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.977868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.977894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.977985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.978010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.978090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.978116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.978196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.978224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.978357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.978383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.978463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.978489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.978575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.978601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.978716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.978742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.978827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.978864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.978944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.978971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.979116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.979143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.979277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.979303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.979 qpair failed and we were unable to recover it.
00:27:49.979 [2024-07-15 09:36:00.979383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.979 [2024-07-15 09:36:00.979408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.979554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.979580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.979710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.979749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.979887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.979921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.980029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.980061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.980181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.980219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.980460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.980552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.980712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.980738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.980825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.980863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.980968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.980995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.981141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.981173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.981336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.981367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.981542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.981581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.981686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.981724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.981852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.981882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.981958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.981984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.982096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.982142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.982286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.982337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.982489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.982533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.982719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.982758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.982881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.982907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.983041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.983067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.983175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.983215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.983396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.983427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.983598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.983649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.983789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.983829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.983959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.983985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.984117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.984155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.984332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.984364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.984526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.984559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.984707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.984745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.984900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.984926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.985059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.985085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.985170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.985196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.985322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.985373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.985529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.985561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.985681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.985735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.980 qpair failed and we were unable to recover it.
00:27:49.980 [2024-07-15 09:36:00.985844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.980 [2024-07-15 09:36:00.985871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.985984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.986009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.986126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.986151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.986314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.986351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.986468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.986512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.986703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.986736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.986871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.986898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.987012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.987038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.987119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.987163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.987297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.987335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.987473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.987515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.987709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.987748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.987888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.987914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.988022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.988047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.988184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.988229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.988384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.988417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.988551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.988599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.988730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.988776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.988925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.988952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.989066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.989092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.989244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.989269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.989459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.989491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.989699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.989739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.989891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.989917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.990027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.990053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.990192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.990219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.990459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.990525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.990672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.990735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.990920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.990947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.991035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.991060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.991170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.991204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.991329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.991372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.991530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.991564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.991729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.991840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.991984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.992010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.992124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.992174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.992357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.992432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.992687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.992754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.992957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.992983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.993069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.993094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.993222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.981 [2024-07-15 09:36:00.993248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.981 qpair failed and we were unable to recover it.
00:27:49.981 [2024-07-15 09:36:00.993335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.993360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.993444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.993470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.993682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.993747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.993934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.993972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.994131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.994165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.994349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.994429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.994624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.994656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.994787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.994843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.994983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.995009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.995117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.995158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.995292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.995323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.995443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.995470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.995562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.995617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.995770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.995817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.995983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.996023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.996148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.996185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.996350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.996386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.996489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.996523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.996634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.996668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.996827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.996859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.996983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.997015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.997156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.997188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.997347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.997399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.997558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.997589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.997756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.997794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.997925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.997965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.998123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.998162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.998358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.998390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.998496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.998527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.998664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.998696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.998842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.998883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.999028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.999066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.999233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.999271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.999388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.999428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.999616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.999654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.999777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.999827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:00.999941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:00.999980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:01.000169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:01.000208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:01.000395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:01.000427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:01.000530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:01.000562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.982 qpair failed and we were unable to recover it.
00:27:49.982 [2024-07-15 09:36:01.000696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.982 [2024-07-15 09:36:01.000728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.000842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.000875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.001070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.001102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.001240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.001273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.001402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.001434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.001588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.001620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.001718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.001751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.001926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.001965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.002086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.002138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.002303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.002335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.002499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.002539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.002737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.002777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.002953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.002994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.003154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.003194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.003350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.003390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.003541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.003582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.003744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.003790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.003972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.004004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.004167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.004200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.004403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.004444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.004635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.004667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.004793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.004831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.004958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.005008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.005141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.005172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.005323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.005363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.005564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.005596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.005700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.005737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.005905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.005953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.006070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.006114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.006311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.006343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.006481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.006513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.006656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.006695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.006852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.006892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.007044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.983 [2024-07-15 09:36:01.007083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.983 qpair failed and we were unable to recover it.
00:27:49.983 [2024-07-15 09:36:01.007202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.983 [2024-07-15 09:36:01.007241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.983 qpair failed and we were unable to recover it. 00:27:49.983 [2024-07-15 09:36:01.007369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.983 [2024-07-15 09:36:01.007408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.983 qpair failed and we were unable to recover it. 00:27:49.983 [2024-07-15 09:36:01.007573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.983 [2024-07-15 09:36:01.007605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.983 qpair failed and we were unable to recover it. 00:27:49.983 [2024-07-15 09:36:01.007767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.983 [2024-07-15 09:36:01.007799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.983 qpair failed and we were unable to recover it. 00:27:49.983 [2024-07-15 09:36:01.007939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.007991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.008093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.008125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.008264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.008296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.008431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.008462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.008629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.008667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.008815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.008855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 
00:27:49.984 [2024-07-15 09:36:01.009004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.009042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.009188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.009228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.009372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.009418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.009582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.009628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.009795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.009849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.010011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.010052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.010219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.010260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.010400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.010441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.010596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.010637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.010796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.010848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 
00:27:49.984 [2024-07-15 09:36:01.011009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.011050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.011212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.011253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.011384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.011425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.011595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.011627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.011759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.011792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.011932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.011973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.012127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.012167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.012329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.012369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.012532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.012572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.012698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.012739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 
00:27:49.984 [2024-07-15 09:36:01.012887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.012929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.013052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.013093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.013225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.013265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.013423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.013464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.013601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.013632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.013761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.013793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.013918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.013951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.014118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.014159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.014346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.014378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.014500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.014533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 
00:27:49.984 [2024-07-15 09:36:01.014649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.014698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.014840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.014875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.015009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.015042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.015168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.015210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.015373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.015414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.015570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.015609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.015766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.015817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.984 [2024-07-15 09:36:01.015974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.984 [2024-07-15 09:36:01.016015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.984 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.016178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.016229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.016330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.016367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 
00:27:49.985 [2024-07-15 09:36:01.016525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.016567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.016695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.016735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.016934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.016967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.017071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.017104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.017240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.017272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.017381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.017413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.017571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.017603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.017742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.017788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.017964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.018006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.018206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.018246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 
00:27:49.985 [2024-07-15 09:36:01.018410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.018451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.018608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.018648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.018777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.018858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.018966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.018997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.019128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.019159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.019259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.019291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.019438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.019470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.019605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.019637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.019767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.019817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.019944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.019996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 
00:27:49.985 [2024-07-15 09:36:01.020107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.020139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.020242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.020274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.020376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.020407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.020565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.020596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.020745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.020796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.020929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.020960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.021095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.021135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.021295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.021336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.021502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.021543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.021704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.021751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 
00:27:49.985 [2024-07-15 09:36:01.021956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.021987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.022083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.022114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.022246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.022277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.022409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.022440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.022610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.022644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.022781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.022819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.022990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.023029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.023187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.023227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.985 qpair failed and we were unable to recover it. 00:27:49.985 [2024-07-15 09:36:01.023392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.985 [2024-07-15 09:36:01.023433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.023590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.023637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 
00:27:49.986 [2024-07-15 09:36:01.023808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.023860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.023987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.024026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.024233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.024276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.024479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.024520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.024677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.024720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.024871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.024916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.025115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.025158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.025346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.025385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.025586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.025625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.025762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.025809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 
00:27:49.986 [2024-07-15 09:36:01.026009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.026050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.026217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.026257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.026446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.026487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.026696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.026736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.026893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.026934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.027123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.027162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.027341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.027382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.027541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.027583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.027758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.027831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.028013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.028057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 
00:27:49.986 [2024-07-15 09:36:01.028223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.028266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.028413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.028456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.028598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.028641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.028821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.028866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.029031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.029073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.029205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.029247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.029427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.029471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.029644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.029687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.029856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.029901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.030037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.030079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 
00:27:49.986 [2024-07-15 09:36:01.030253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.030297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.030464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.030506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.030650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.986 [2024-07-15 09:36:01.030692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.986 qpair failed and we were unable to recover it. 00:27:49.986 [2024-07-15 09:36:01.030818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.030862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.031028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.031071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.031205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.031247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.031390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.031433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.031629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.031672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.031837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.031881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.032022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.032071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 
00:27:49.987 [2024-07-15 09:36:01.032215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.032257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.032428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.032471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.032638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.032681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.032820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.032873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.033046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.033088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.033250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.033292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.033451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.033494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.033612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.033655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.033852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.033895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.034059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.034101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 
00:27:49.987 [2024-07-15 09:36:01.034266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.034308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.034456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.034501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.034644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.034688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.034863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.034906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.035044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.035085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.035256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.035298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.035474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.035518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.035718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.035762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.035936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.035979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.036146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.036188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 
00:27:49.987 [2024-07-15 09:36:01.036359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.036400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.036571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.036614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.036778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.036860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.036996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.037040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.037208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.037250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.037411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.037453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.037664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.037707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.037880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.037923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.038096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.038139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.038336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.038380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 
00:27:49.987 [2024-07-15 09:36:01.038547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.038591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.038832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.038875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.039057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.039102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.039271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.039315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.987 qpair failed and we were unable to recover it. 00:27:49.987 [2024-07-15 09:36:01.039483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.987 [2024-07-15 09:36:01.039527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.988 qpair failed and we were unable to recover it. 00:27:49.988 [2024-07-15 09:36:01.039701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.988 [2024-07-15 09:36:01.039746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.988 qpair failed and we were unable to recover it. 00:27:49.988 [2024-07-15 09:36:01.039901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.988 [2024-07-15 09:36:01.039947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.988 qpair failed and we were unable to recover it. 00:27:49.988 [2024-07-15 09:36:01.040133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.988 [2024-07-15 09:36:01.040180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.988 qpair failed and we were unable to recover it. 00:27:49.988 [2024-07-15 09:36:01.040329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.988 [2024-07-15 09:36:01.040372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.988 qpair failed and we were unable to recover it. 00:27:49.988 [2024-07-15 09:36:01.040544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.988 [2024-07-15 09:36:01.040593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.988 qpair failed and we were unable to recover it. 
00:27:49.988 [2024-07-15 09:36:01.040726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.040768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.040956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.040999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.041200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.041243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.041440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.041483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.041630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.041682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.041928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.041972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.042139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.042183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.042373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.042419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.042602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.042647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.042844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.042889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.043041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.043085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.043256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.043303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.043477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.043521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.043699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.043746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.043970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.044016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.044194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.044240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.044446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.044492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.044702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.044761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.045035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.045094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.045294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.045343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.045528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.045576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.045790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.988 [2024-07-15 09:36:01.045995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.988 qpair failed and we were unable to recover it.
00:27:49.988 [2024-07-15 09:36:01.046218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.046265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.046476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.046521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.046690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.046736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.047026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.047073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.047234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.047280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.047489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.047535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.047730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.047774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.047964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.048005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.048143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.048188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.048391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.048436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.048615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.048664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.048865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.048924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.049129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.049177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.049388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.049436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.049600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.049650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.049851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.049917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.050110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.050167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.050368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.050421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.050627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.050673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.050858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.050933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.051184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.051232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.051455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.051501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.051672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.051719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.051933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.052008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.052285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.052359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.052554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.052601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.052733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.052778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.053003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.053077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.053303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.053352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.053548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.053593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.053808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.053854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.054073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.054150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.054408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.054482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.054676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.054722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.054893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.054949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.055161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.055211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.055424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.055475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.055675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.055721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.055926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.055983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.056151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.056217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.989 [2024-07-15 09:36:01.056438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.989 [2024-07-15 09:36:01.056509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.989 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.056682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.056728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.056958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.057007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.057261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.057316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.057588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.057642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.057883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.057958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.058194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.058267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.058513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.058561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.058766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.058819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.059039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.059113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.059389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.059437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.059630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.059676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.059854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.059910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.060140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.060214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.060453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.060527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.060727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.060772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.060981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.061055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.061338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.061394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.061579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.061624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.061798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.061870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.062055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.062103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.062360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.062415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.062643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.062688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.062857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.062923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.063130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.063206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.063448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.063504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.063734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.063779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.063931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.063995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.064213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.064282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.064476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.064532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.064757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.064822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.065008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.065065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.065330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.065404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.065574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.065619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.065764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.065816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.066009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.066055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.066275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.066331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.066583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.066638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.066880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.066955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.067226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.067280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.067469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.990 [2024-07-15 09:36:01.067523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.990 qpair failed and we were unable to recover it.
00:27:49.990 [2024-07-15 09:36:01.067750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.067796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.067992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.068038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.068242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.068288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.068436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.068483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.068724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.068779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.069006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.069058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.069275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.069326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.069558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.069608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.069778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.069840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.070027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.070078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.070301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.070353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.070586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.070637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.070843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.070895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.071087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.071139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.071345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.071396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.071626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.071677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.071887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.071947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.072188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.072239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.072432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.072484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.072685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.072741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.072975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.073027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.073236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.073288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.073443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.073494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.073678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.073735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.073987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.074043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.074209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.074278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.074508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.074563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.074797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.074879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.075088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.075139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.075342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.075395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.075596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.075649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.075863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.075915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.076149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.076200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.076399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.076449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.076651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.076702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.076900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.076953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.077150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.077201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.077398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.077450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.077668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.077723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.077940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.077994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.078185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.991 [2024-07-15 09:36:01.078237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.991 qpair failed and we were unable to recover it.
00:27:49.991 [2024-07-15 09:36:01.078429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.078481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.078701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.078752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.078971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.079024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.079246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.079318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.079594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.079649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.079843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.079896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.080109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.080182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.080460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.080515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.080753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.080821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.081070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.081144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.081411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.081484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.081669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.081739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.082004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.082078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.082260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.082313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.082522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.082576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.082816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.082875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.083051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.083103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.083295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.083348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.083527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.083578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.083775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.083838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.083999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.084051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.084216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.084266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.084454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.084504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.084702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.084754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.085016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.085068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.085269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.992 [2024-07-15 09:36:01.085320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:49.992 qpair failed and we were unable to recover it.
00:27:49.992 [2024-07-15 09:36:01.085491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.992 [2024-07-15 09:36:01.085545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.992 qpair failed and we were unable to recover it. 00:27:49.992 [2024-07-15 09:36:01.085717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.992 [2024-07-15 09:36:01.085768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.992 qpair failed and we were unable to recover it. 00:27:49.992 [2024-07-15 09:36:01.085951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.992 [2024-07-15 09:36:01.086004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.992 qpair failed and we were unable to recover it. 00:27:49.992 [2024-07-15 09:36:01.086253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.992 [2024-07-15 09:36:01.086305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.992 qpair failed and we were unable to recover it. 00:27:49.992 [2024-07-15 09:36:01.086531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.992 [2024-07-15 09:36:01.086582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.992 qpair failed and we were unable to recover it. 00:27:49.992 [2024-07-15 09:36:01.086741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.992 [2024-07-15 09:36:01.086792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.992 qpair failed and we were unable to recover it. 00:27:49.992 [2024-07-15 09:36:01.087008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.992 [2024-07-15 09:36:01.087060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.992 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.087293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.087344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.087532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.087582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.087771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.087837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 
00:27:49.993 [2024-07-15 09:36:01.088073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.088125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.088393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.088447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.088617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.088671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.088907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.088964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.089168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.089223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.089433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.089487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.089702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.089758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.089957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.090012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.090188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.090242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.090488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.090542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 
00:27:49.993 [2024-07-15 09:36:01.090756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.090819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.091039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.091093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.091335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.091390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.091594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.091648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.091850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.091906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.092084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.092140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.092380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.092435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.092609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.092684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.092869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.092918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.093074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.093129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 
00:27:49.993 [2024-07-15 09:36:01.093329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.093377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.093559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.093606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.093749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.093797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.094034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.094085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.094254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.094305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.094496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.094549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.094709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.094761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.094978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.095029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.095192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.095243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.095510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.095562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 
00:27:49.993 [2024-07-15 09:36:01.095791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.095864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.096044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.096099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.096326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.096381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.096570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.096625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.096850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.096906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.097066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.097123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.097370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.097425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.993 qpair failed and we were unable to recover it. 00:27:49.993 [2024-07-15 09:36:01.097610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.993 [2024-07-15 09:36:01.097665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.097845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.097901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.098100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.098155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 
00:27:49.994 [2024-07-15 09:36:01.098381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.098436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.098605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.098659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.098869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.098924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.099133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.099187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.099381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.099436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.099635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.099690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.099926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.099983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.100176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.100230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.100403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.100457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.100694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.100749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 
00:27:49.994 [2024-07-15 09:36:01.100991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.101046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.101263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.101317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.101493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.101548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.101762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.101840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.102071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.102126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.102309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.102364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.102536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.102589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.102830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.102889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.103060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.103114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.103363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.103426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 
00:27:49.994 [2024-07-15 09:36:01.103602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.103656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.103846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.103902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.104081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.104137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:49.994 [2024-07-15 09:36:01.104341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.994 [2024-07-15 09:36:01.104394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:49.994 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.104599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.104656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.104841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.104899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.105077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.105130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.105308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.105360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.105595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.105653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.105865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.105921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 
00:27:50.290 [2024-07-15 09:36:01.106113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.106164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.106351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.106411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.106596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.106646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.106825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.106881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.107068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.107122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.107310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.107364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.107576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.107637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.107824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.107883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.108046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.108099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.108340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.108396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 
00:27:50.290 [2024-07-15 09:36:01.108611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.108666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.108861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.108917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.109129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.109184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.109370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.109425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.109607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.109663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.109872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.109928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.110155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.110210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.110416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.110471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.110642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.110696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.110863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.110918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 
00:27:50.290 [2024-07-15 09:36:01.111099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.111152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.111330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.111385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.111586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.111639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.111843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.111899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.112109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.112179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.112393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.112446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.112622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.112677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.112899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.112954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.113130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.113183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.113414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.113476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 
00:27:50.290 [2024-07-15 09:36:01.113689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.113744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.113967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.114023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.114197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.114258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.114459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.114522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.114695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.114748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.114998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.115054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.115277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.115331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.115541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.115597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.115815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.115874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.290 qpair failed and we were unable to recover it. 00:27:50.290 [2024-07-15 09:36:01.116047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.290 [2024-07-15 09:36:01.116099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.291 qpair failed and we were unable to recover it. 
00:27:50.291 [2024-07-15 09:36:01.116273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.291 [2024-07-15 09:36:01.116326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.291 qpair failed and we were unable to recover it. 00:27:50.291 [2024-07-15 09:36:01.116524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.291 [2024-07-15 09:36:01.116579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.291 qpair failed and we were unable to recover it. 00:27:50.291 [2024-07-15 09:36:01.116792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.291 [2024-07-15 09:36:01.116863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.291 qpair failed and we were unable to recover it. 00:27:50.291 [2024-07-15 09:36:01.117134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.291 [2024-07-15 09:36:01.117207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.291 qpair failed and we were unable to recover it. 00:27:50.291 [2024-07-15 09:36:01.117383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.291 [2024-07-15 09:36:01.117437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.291 qpair failed and we were unable to recover it. 00:27:50.291 [2024-07-15 09:36:01.117618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.291 [2024-07-15 09:36:01.117673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.291 qpair failed and we were unable to recover it. 00:27:50.291 [2024-07-15 09:36:01.117873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.291 [2024-07-15 09:36:01.117906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.291 qpair failed and we were unable to recover it. 00:27:50.291 [2024-07-15 09:36:01.118019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.291 [2024-07-15 09:36:01.118051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.291 qpair failed and we were unable to recover it. 00:27:50.291 [2024-07-15 09:36:01.118159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.291 [2024-07-15 09:36:01.118193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.291 qpair failed and we were unable to recover it. 00:27:50.291 [2024-07-15 09:36:01.118305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.291 [2024-07-15 09:36:01.118339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.291 qpair failed and we were unable to recover it. 
00:27:50.291 [2024-07-15 09:36:01.119377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.291 [2024-07-15 09:36:01.119427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.291 qpair failed and we were unable to recover it.
00:27:50.291 [2024-07-15 09:36:01.119590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.291 [2024-07-15 09:36:01.119641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.291 qpair failed and we were unable to recover it.
[... identical failure pattern repeats for tqpair=0x7f6a58000b90 through 09:36:01.121, then again for tqpair=0x7f6a60000b90 through 09:36:01.127 ...]
00:27:50.292 [2024-07-15 09:36:01.127624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.127653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.127806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.127838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.127931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.127959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.128056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.128096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.128187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.128215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.128311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.128342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.128433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.128462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.128554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.128583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.128682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.128710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.128812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.128842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 
00:27:50.292 [2024-07-15 09:36:01.128936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.128965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.129059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.129087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.129179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.129208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.129304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.129333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.129433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.129463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.129555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.129583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.129702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.129732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.129835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.129874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.129965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.129994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.130098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.130127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 
00:27:50.292 [2024-07-15 09:36:01.130225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.130255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.130352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.130380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.130475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.130505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.130616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.130663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.130764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.130794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.130917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.130945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.131066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.131101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.131218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.131247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.131356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.131386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.131475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.131507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 
00:27:50.292 [2024-07-15 09:36:01.131604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.131633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.131724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.131753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.131854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.131889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.132021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.132054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.132161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.132192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.132312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.132345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.132437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.132472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.132606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.292 [2024-07-15 09:36:01.132636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.292 qpair failed and we were unable to recover it. 00:27:50.292 [2024-07-15 09:36:01.132728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.132763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.132899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.132930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 
00:27:50.293 [2024-07-15 09:36:01.133027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.133072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.133175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.133210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.133302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.133333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.133423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.133453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.133552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.133582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.133676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.133705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.133818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.133867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.133981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.134009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.134109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.134136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.134256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.134284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 
00:27:50.293 [2024-07-15 09:36:01.134385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.134413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.134504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.134532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.134641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.134682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.134789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.134827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.134928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.134957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.135047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.135074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.135168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.135195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.135308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.135335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.135422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.135449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.135543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.135570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 
00:27:50.293 [2024-07-15 09:36:01.135660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.135688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.135776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.135814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.135912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.135940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.136061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.136089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.136182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.136210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.136292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.136320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.136437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.136470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.136561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.136588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.136687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.136728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.136832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.136878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 
00:27:50.293 [2024-07-15 09:36:01.136982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.137009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.137101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.137127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.137216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.137242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.137332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.137360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.137454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.137482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.137572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.137599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.137682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.137708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.137819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.137846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.137959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.137986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.138073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.138100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 
00:27:50.293 [2024-07-15 09:36:01.138188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.138215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.138306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.138333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.138418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.138445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.138528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.293 [2024-07-15 09:36:01.138555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.293 qpair failed and we were unable to recover it. 00:27:50.293 [2024-07-15 09:36:01.138759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.138806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.138923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.138951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.139045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.139072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.139156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.139181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.139317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.139344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.139430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.139459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 
00:27:50.294 [2024-07-15 09:36:01.139568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.139597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.139689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.139718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.139806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.139836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.139925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.139956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.140045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.140072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.140159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.140187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.140336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.140362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.140457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.140487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.140578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.140605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.140718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.140746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 
00:27:50.294 [2024-07-15 09:36:01.140852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.140879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.140964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.140989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.141067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.141092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.141200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.141225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.141338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.141363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.141472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.141500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.141588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.141617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.141736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.141762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.141846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.141872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.141968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.141994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 
00:27:50.294 [2024-07-15 09:36:01.142117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.142143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.142230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.142258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.142355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.142382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.142468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.142494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.142586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.142613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.142699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.142725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.142848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.142887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.143005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.143031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.143116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.143141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.143224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.143250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 
00:27:50.294 [2024-07-15 09:36:01.143342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.143369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.143456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.143483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.143598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.143624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.143705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.143731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.143849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.143877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.143974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.144001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.144087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.144113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.144198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.144224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.144307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.144333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.144417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.144444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 
00:27:50.294 [2024-07-15 09:36:01.144532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.144560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.144653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.144681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.144761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.144787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.144885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.144916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.145002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.145027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.145108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.145133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.145267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.145292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.145380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.145408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.145497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.145524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 00:27:50.294 [2024-07-15 09:36:01.145646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.294 [2024-07-15 09:36:01.145674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.294 qpair failed and we were unable to recover it. 
00:27:50.294 [2024-07-15 09:36:01.145758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.145783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.145889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.145915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.146026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.146052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.146141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.146167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.146252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.146278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.146362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.146390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.146481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.146507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.146604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.146630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.146722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.146747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.146834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.146868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 
00:27:50.295 [2024-07-15 09:36:01.146987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.147012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.147131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.147157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.147245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.147272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.147363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.147390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.147473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.147500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.147587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.147613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.147696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.147722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.147810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.147837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.147925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.147951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.148058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.148085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 
00:27:50.295 [2024-07-15 09:36:01.148201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.148227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.148318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.148344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.148431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.148458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.148551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.148576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.148656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.148682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.148784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.148815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.148904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.148930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.149011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.149036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.149116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.149140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.149254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.149278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 
00:27:50.295 [2024-07-15 09:36:01.149440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.149465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.149540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.149565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.149678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.149705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.149813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.149840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.149965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.149991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.150069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.150094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.150207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.150233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.150342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.150369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.150449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.150476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.150553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.150578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 
00:27:50.295 [2024-07-15 09:36:01.150655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.150680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.150767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.150792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.150912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.150938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.151022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.151047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.151131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.151158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.151240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.151267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.151347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.151374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.151464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.151491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.151585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.151612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.151716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.151742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 
00:27:50.295 [2024-07-15 09:36:01.151849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.151877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.151987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.152013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.152126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.152151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.152230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.152255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.152471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.152496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.152584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.295 [2024-07-15 09:36:01.152610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 09:36:01.152698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.152724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.152809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.152834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.152927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.152952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.153031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.153057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 
00:27:50.296 [2024-07-15 09:36:01.153182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.153212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.153304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.153330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.153407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.153432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.153514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.153539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.153622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.153649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.153729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.153754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.153834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.153859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.153951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.153976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.154084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.154108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.154228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.154254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 
00:27:50.296 [2024-07-15 09:36:01.154334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.154359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.154470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.154498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.154582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.154611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.154714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.154752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.154885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.154912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.155001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.155027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.155232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.155258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.155333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.155358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.155441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.155465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.155553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.155578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 
00:27:50.296 [2024-07-15 09:36:01.155666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.155691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.155795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.155825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.155931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.155956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.156032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.156057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.156145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.156172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.156255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.156281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.156393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.156419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.156546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.156586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.156701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.156730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.156835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.156873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 
00:27:50.296 [2024-07-15 09:36:01.156963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.156990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.157081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.157107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.157193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.157220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.157331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.157356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.157447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.157476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.157564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.157593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.157761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.157818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.157941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.157966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.158050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.158076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.158158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.158182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 
00:27:50.296 [2024-07-15 09:36:01.158290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.158320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 09:36:01.158411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.296 [2024-07-15 09:36:01.158470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.158634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.158678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.158870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.158896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.159007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.159033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.159183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.159225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.159365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.159407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.159534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.159608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.159823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.159859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.159976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.160002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 
00:27:50.297 [2024-07-15 09:36:01.160120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.160164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.160305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.160347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.160503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.160544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.160675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.160719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.160878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.160905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.160986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.161012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.161108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.161134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.161224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.161250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.161327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.161353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.161441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.161488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 
00:27:50.297 [2024-07-15 09:36:01.161680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.161722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.161899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.161937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.162026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.162054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.162188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.162228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.162380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.162420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.162551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.162598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.162870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.162909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.163002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.163030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.163156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.163197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.163358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.163399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 
00:27:50.297 [2024-07-15 09:36:01.163560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.163601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.163721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.163762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.163899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.163928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.164014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.164041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.164198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.164244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.164412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.164454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.164618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.164662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.164793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.164858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.164939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.164964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.165050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.165077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 
00:27:50.297 [2024-07-15 09:36:01.165259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.297 [2024-07-15 09:36:01.165306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.297 qpair failed and we were unable to recover it. 00:27:50.297 [2024-07-15 09:36:01.165432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.165475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.165604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.165646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.165837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.165884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.165977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.166003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.166125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.166165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.166325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.166366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.166499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.166541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.166679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.166726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.166873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.166900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 
00:27:50.298 [2024-07-15 09:36:01.167014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.167041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.167190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.167231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.167391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.167431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.167595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.167636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.167819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.167870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.167961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.167986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.168067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.168117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.168249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.168289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.168411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.168451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.168579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.168620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 
00:27:50.298 [2024-07-15 09:36:01.168761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.168811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.168922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.168947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.169022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.169047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.169173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.169212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.169368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.169407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.169572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.169611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.169766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.169828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.169957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.169996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.170088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.170117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.170223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.170270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 
00:27:50.298 [2024-07-15 09:36:01.170440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.170482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.170642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.170684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.170870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.170899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.170985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.171011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.171095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.171121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.171304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.171346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.171475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.171516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.171677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.171723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.171883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.171910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 00:27:50.298 [2024-07-15 09:36:01.172001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.298 [2024-07-15 09:36:01.172027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.298 qpair failed and we were unable to recover it. 
00:27:50.298 [2024-07-15 09:36:01.172127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.172175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.172340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.172381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.172545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.172586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.172711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.172753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.172876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.172902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.173011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.173037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.173149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.173190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.173343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.173383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.173524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.173565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.173688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.173729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.173868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.173895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.174005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.174032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.174185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.174226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.174345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.174387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.174520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.298 [2024-07-15 09:36:01.174561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.298 qpair failed and we were unable to recover it.
00:27:50.298 [2024-07-15 09:36:01.174710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.174767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.174894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.174921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.175001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.175027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.175137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.175177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.175339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.175379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.175505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.175545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.175662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.175706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.175872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.175899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.176018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.176044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.176125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.176151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.176256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.176297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.176431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.176499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.176680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.176731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.176870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.176896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.176976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.177001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.177088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.177114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.177196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.177222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.177298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.177323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.177433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.177476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.177615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.177658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.177792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.177861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.177952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.177978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.178084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.178139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.178332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.178373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.178495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.178535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.178671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.178696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.178782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.178813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.178920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.178946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.179034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.179060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.179144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.179169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.179250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.179275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.179382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.179407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.179489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.179514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.179601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.179657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.179826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.179873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.179960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.179986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.180077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.180103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.180208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.180235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.180319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.180347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.180443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.180469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.180558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.180584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.180673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.180699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.180787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.180821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.180907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.180933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.181015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.181041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.181176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.181216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.181340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.181380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.181518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.181561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.181697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.181724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.181827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.181853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.181936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.181961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.182045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.182071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.182181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.182207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.182289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.182314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.182395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.182444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.182565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.299 [2024-07-15 09:36:01.182604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.299 qpair failed and we were unable to recover it.
00:27:50.299 [2024-07-15 09:36:01.182834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.182873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.182976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.183003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.183092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.183118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.183225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.183251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.183357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.183383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.183526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.183567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.183691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.183734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.183869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.183894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.183985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.184011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.184131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.184157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.184271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.184297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.184386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.184411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.184520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.184546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.184630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.184656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.184817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.184879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.184977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.185005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.185141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.185182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.185338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.185381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.185549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.185576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.185695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.185721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.185807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.185833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.185919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.185945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.186037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.186063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.186149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.186180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.186278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.186305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.186412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.186437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.186522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.186548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.186676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.186716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.186860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.186886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.186999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.187024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.187128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.187154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.187236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.187262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.187383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.187409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.187487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.187513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.187619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.187657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.187739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.187766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.187858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.187885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.187969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.187995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.188091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.188132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.188305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.188330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.188474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.188501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.188623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.188661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.188754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.188780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.188903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.188929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.189021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.189047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.189131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.189157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.189300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.300 [2024-07-15 09:36:01.189340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.300 qpair failed and we were unable to recover it.
00:27:50.300 [2024-07-15 09:36:01.189472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.189516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.189658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.189705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.189875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.189904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.190016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.190043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.190137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.190163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.190256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.190282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.190365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.190391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.190510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.190561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.190641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.190666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.190744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.190770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.190874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.190913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.191004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.191034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.191147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.191173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.191317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.191343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.191426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.191452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.191568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.191619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.191793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.191865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.191970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.191996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.192079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.192134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.192293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.192318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.192435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.192465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.192575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.192601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.192711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.192739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.192830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.192858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.192951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.192978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.193063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.193090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.193198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.193224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.193316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.193363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.193499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.193525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.193649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.193677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.193777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.193821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.193911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.193938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.194033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.194059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.194175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.194200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.194281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.194307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.194433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.194473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.194635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.194683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.194765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.194791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.194888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.194913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.194993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.195019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.195114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.195138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.195245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.195270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.195438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.195474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.195609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.195659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.195819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.195867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.195981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.196007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.196101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.196128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.196228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.196254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.196338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.196364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.196454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.196480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.196568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.196620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.196747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.301 [2024-07-15 09:36:01.196786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.301 qpair failed and we were unable to recover it.
00:27:50.301 [2024-07-15 09:36:01.196923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.301 [2024-07-15 09:36:01.196949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.301 qpair failed and we were unable to recover it. 00:27:50.301 [2024-07-15 09:36:01.197058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.301 [2024-07-15 09:36:01.197083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.301 qpair failed and we were unable to recover it. 00:27:50.301 [2024-07-15 09:36:01.197163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.301 [2024-07-15 09:36:01.197187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.301 qpair failed and we were unable to recover it. 00:27:50.301 [2024-07-15 09:36:01.197292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.301 [2024-07-15 09:36:01.197317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.301 qpair failed and we were unable to recover it. 00:27:50.301 [2024-07-15 09:36:01.197410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.197435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.197527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.197554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.197661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.197686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.197806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.197835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.197929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.197958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.198047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.198072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 
00:27:50.302 [2024-07-15 09:36:01.198183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.198208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.198323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.198352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.198443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.198468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.198552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.198579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.198676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.198715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.198817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.198846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.198926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.198952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.199037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.199063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.199175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.199201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.199279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.199330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 
00:27:50.302 [2024-07-15 09:36:01.199442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.199481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.199637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.199662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.199750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.199776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.199888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.199914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.199998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.200024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.200166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.200205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.200332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.200372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.200495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.200534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.200666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.200704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.200861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.200895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 
00:27:50.302 [2024-07-15 09:36:01.200983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.201009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.201147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.201195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.201314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.201354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.201471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.201510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.201664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.201705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.201846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.201873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.201985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.202011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.202121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.202147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.202227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.202252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.202363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.202412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 
00:27:50.302 [2024-07-15 09:36:01.202581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.202623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.202756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.202796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.202913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.202940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.203034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.203062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.203281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.203320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.203488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.203531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.203718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.203758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.203887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.203914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.203994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.204024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.204217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.204261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 
00:27:50.302 [2024-07-15 09:36:01.204414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.204458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.204707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.204772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.204936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.204963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.205049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.205093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.205246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.205284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.205440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.205479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.205630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.205688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.205831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.302 [2024-07-15 09:36:01.205880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.302 qpair failed and we were unable to recover it. 00:27:50.302 [2024-07-15 09:36:01.205980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.303 [2024-07-15 09:36:01.206012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.303 qpair failed and we were unable to recover it. 00:27:50.303 [2024-07-15 09:36:01.206154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.303 [2024-07-15 09:36:01.206195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.303 qpair failed and we were unable to recover it. 
00:27:50.303 [2024-07-15 09:36:01.206352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.303 [2024-07-15 09:36:01.206391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.303 qpair failed and we were unable to recover it. 00:27:50.303 [2024-07-15 09:36:01.206517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.303 [2024-07-15 09:36:01.206559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.303 qpair failed and we were unable to recover it. 00:27:50.303 [2024-07-15 09:36:01.206697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.303 [2024-07-15 09:36:01.206738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.303 qpair failed and we were unable to recover it. 00:27:50.303 [2024-07-15 09:36:01.206885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.303 [2024-07-15 09:36:01.206911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.303 qpair failed and we were unable to recover it. 00:27:50.303 [2024-07-15 09:36:01.207004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.303 [2024-07-15 09:36:01.207030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.303 qpair failed and we were unable to recover it. 00:27:50.303 [2024-07-15 09:36:01.207174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.303 [2024-07-15 09:36:01.207212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.303 qpair failed and we were unable to recover it. 00:27:50.303 [2024-07-15 09:36:01.207351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.303 [2024-07-15 09:36:01.207389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.303 qpair failed and we were unable to recover it. 00:27:50.303 [2024-07-15 09:36:01.207555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.303 [2024-07-15 09:36:01.207598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.303 qpair failed and we were unable to recover it. 00:27:50.303 [2024-07-15 09:36:01.207725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.303 [2024-07-15 09:36:01.207765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.303 qpair failed and we were unable to recover it. 00:27:50.303 [2024-07-15 09:36:01.207894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.303 [2024-07-15 09:36:01.207921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.303 qpair failed and we were unable to recover it. 
00:27:50.303 [2024-07-15 09:36:01.208037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.303 [2024-07-15 09:36:01.208064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.208230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.208269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.208404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.208445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.208571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.208611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.208770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.208822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.208957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.208983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.209064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.209118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.209253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.209294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.209437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.209494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.209626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.209667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 
00:27:50.304 [2024-07-15 09:36:01.209849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.209875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.209965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.209991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.210069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.210095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.210222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.210260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.210386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.210425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.210587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.210629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.210782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.210863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.210953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.210982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.211115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.211154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.211286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.211325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 
00:27:50.304 [2024-07-15 09:36:01.211454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.304 [2024-07-15 09:36:01.211495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.304 qpair failed and we were unable to recover it. 00:27:50.304 [2024-07-15 09:36:01.211667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.211693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.211836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.211863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.211968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.211994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.212075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.212101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.212183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.212209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.212298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.212324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.212406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.212455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.212612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.212656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.212781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.212829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 
00:27:50.305 [2024-07-15 09:36:01.212941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.212967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.213079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.213105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.213187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.213232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.213360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.213401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.213588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.213627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.213764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.213832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.213937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.213964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.214058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.214123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.214291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.214319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.214408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.214436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 
00:27:50.305 [2024-07-15 09:36:01.214523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.214576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.214700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.214740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.214892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.214919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.215009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.215036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.215156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.215195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.215398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.215423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.215545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.215584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.215718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.215755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.215892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.215920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.305 qpair failed and we were unable to recover it. 00:27:50.305 [2024-07-15 09:36:01.216014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.305 [2024-07-15 09:36:01.216040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 
00:27:50.306 [2024-07-15 09:36:01.216186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.216225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.216391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.216416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.216576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.216614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.216736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.216775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.216934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.216962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.217048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.217080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.217164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.217190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.217301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.217327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.217444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.217470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.217585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.217623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 
00:27:50.306 [2024-07-15 09:36:01.217738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.217777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.217909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.217937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.218030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.218058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.218228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.218268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.218383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.218423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.218556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.218595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.218729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.218770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.218907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.218934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.219018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.219044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.219158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.219197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 
00:27:50.306 [2024-07-15 09:36:01.219357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.219396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.219575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.219635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.219777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.219830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.219948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.219975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.220114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.220154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.220282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.220323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.220511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.220551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.220743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.220781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.220923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.220962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 00:27:50.306 [2024-07-15 09:36:01.221157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.221196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.306 qpair failed and we were unable to recover it. 
00:27:50.306 [2024-07-15 09:36:01.221333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.306 [2024-07-15 09:36:01.221372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.221532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.221558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.221703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.221732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.221859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.221903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.222066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.222105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.222255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.222294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.222417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.222457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.222613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.222652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.222778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.222833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.222968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.223008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 
00:27:50.307 [2024-07-15 09:36:01.223166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.223206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.223359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.223398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.223519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.223578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.223750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.223789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.223923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.223963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.224089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.224134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.224263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.224303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.224470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.224496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.224585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.224611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 00:27:50.307 [2024-07-15 09:36:01.224746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.307 [2024-07-15 09:36:01.224789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.307 qpair failed and we were unable to recover it. 
00:27:50.308 [2024-07-15 09:36:01.224994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.225038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.225185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.225244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.225436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.225479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.225636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.225682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.225875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.225961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.226156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.226230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.226423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.226450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.226535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.226561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.226650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.226675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.226780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.226856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 
00:27:50.308 [2024-07-15 09:36:01.227032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.227075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.227215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.227256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.227381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.227421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.227570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.227611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.227739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.227778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.227944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.227984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.228152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.228179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.228294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.228321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.228434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.228472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.228612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.228660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 
00:27:50.308 [2024-07-15 09:36:01.228889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.228947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.229189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.229229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.229384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.229424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.229564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.229613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.229758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.229796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.229957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.229998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.230185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.230226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.230369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.230426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.230594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.230633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.308 qpair failed and we were unable to recover it. 00:27:50.308 [2024-07-15 09:36:01.230793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.308 [2024-07-15 09:36:01.230840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 
00:27:50.309 [2024-07-15 09:36:01.230958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.231015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.231158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.231202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.231318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.231343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.231483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.231522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.231668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.231706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.231905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.231953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.232115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.232156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.232331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.232358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.232466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.232492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.232595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.232636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 
00:27:50.309 [2024-07-15 09:36:01.232845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.232884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.233010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.233048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.233166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.233205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.233362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.233400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.233582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.233619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.233763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.233813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.233979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.234019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.234149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.234190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.234347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.234388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.234596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.234637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 
00:27:50.309 [2024-07-15 09:36:01.234796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.234849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.234985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.235027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.235194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.235235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.235365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.235406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.235518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.235558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.235725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.235765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.235963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.236004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.236151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.236191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.236346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.236372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 00:27:50.309 [2024-07-15 09:36:01.236563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.309 [2024-07-15 09:36:01.236588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.309 qpair failed and we were unable to recover it. 
00:27:50.310 [2024-07-15 09:36:01.236715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.236756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.236903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.236944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.237123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.237162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.237304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.237332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.237464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.237507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.237636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.237679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.237819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.237861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.237988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.238029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.238219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.238260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.238447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.238488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 
00:27:50.310 [2024-07-15 09:36:01.238672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.238734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.238916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.238959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.239090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.239133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.239270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.239310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.239449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.239492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.239688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.239731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.239896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.239940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.240142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.240184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.240389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.240429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.310 [2024-07-15 09:36:01.240557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.240599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 
00:27:50.310 [2024-07-15 09:36:01.240764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.310 [2024-07-15 09:36:01.240812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.310 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.240978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.241017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.241206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.241246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.241403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.241444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.241607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.241648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.241792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.241864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.242013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.242055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.242250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.242291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.242452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.242494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.242705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.242746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 
00:27:50.312 [2024-07-15 09:36:01.242892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.242934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.243063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.243105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.243218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.243259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.243472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.243515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.243643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.243686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.243855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.243901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.244101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.312 [2024-07-15 09:36:01.244144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.312 qpair failed and we were unable to recover it. 00:27:50.312 [2024-07-15 09:36:01.244316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.244360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.244531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.244575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.244734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.244778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 
00:27:50.313 [2024-07-15 09:36:01.244966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.245010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.245210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.245253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.245421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.245474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.245642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.245685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.245860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.245887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.245967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.245993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.246132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.246158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.246278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.246304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.246413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.246438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.246545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.246571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 
00:27:50.313 [2024-07-15 09:36:01.246746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.246785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.247044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.247086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.247219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.247259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.247420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.247482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.247646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.247688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.247856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.247901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.248084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.248126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.248298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.248340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.248465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.248508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.248660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.248703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 
00:27:50.313 [2024-07-15 09:36:01.248853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.248896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.249093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.249135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.249304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.249347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.249542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.249584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.249752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.249796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.249990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.250033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.250165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.250207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.250346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.250389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.250572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.250612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.250733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.250780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 
00:27:50.313 [2024-07-15 09:36:01.250943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.250986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.251201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.251241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.251455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.251497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.251780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.251873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.252080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.252123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.252271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.252315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.252483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.252526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.252659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.252701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.252854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.252897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.253033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.253091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 
00:27:50.313 [2024-07-15 09:36:01.253250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.253292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.253484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.253527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.253682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.253725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.253878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.253921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.254107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.254148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.254283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.254323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.254506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.254549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.254761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.254812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.254992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.255049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.255220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.255264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 
00:27:50.313 [2024-07-15 09:36:01.255387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.255430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.255589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.313 [2024-07-15 09:36:01.255659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.313 qpair failed and we were unable to recover it. 00:27:50.313 [2024-07-15 09:36:01.255847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.314 [2024-07-15 09:36:01.255891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.314 qpair failed and we were unable to recover it. 00:27:50.314 [2024-07-15 09:36:01.256028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.314 [2024-07-15 09:36:01.256071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.314 qpair failed and we were unable to recover it. 00:27:50.314 [2024-07-15 09:36:01.256288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.314 [2024-07-15 09:36:01.256328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.314 qpair failed and we were unable to recover it. 00:27:50.314 [2024-07-15 09:36:01.256459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.314 [2024-07-15 09:36:01.256499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.314 qpair failed and we were unable to recover it. 00:27:50.314 [2024-07-15 09:36:01.256661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.314 [2024-07-15 09:36:01.256714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.314 qpair failed and we were unable to recover it. 00:27:50.314 [2024-07-15 09:36:01.256847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.314 [2024-07-15 09:36:01.256894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.314 qpair failed and we were unable to recover it. 00:27:50.314 [2024-07-15 09:36:01.257065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.314 [2024-07-15 09:36:01.257110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.314 qpair failed and we were unable to recover it. 00:27:50.314 [2024-07-15 09:36:01.257330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.314 [2024-07-15 09:36:01.257370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.314 qpair failed and we were unable to recover it. 
00:27:50.315 [2024-07-15 09:36:01.268756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.315 [2024-07-15 09:36:01.268795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.315 qpair failed and we were unable to recover it.
00:27:50.315 [2024-07-15 09:36:01.268951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.315 [2024-07-15 09:36:01.268991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.315 qpair failed and we were unable to recover it.
00:27:50.315 [2024-07-15 09:36:01.269185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.315 [2024-07-15 09:36:01.269249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.315 qpair failed and we were unable to recover it.
00:27:50.315 [2024-07-15 09:36:01.269483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.315 [2024-07-15 09:36:01.269547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.315 qpair failed and we were unable to recover it.
00:27:50.315 [2024-07-15 09:36:01.269708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.315 [2024-07-15 09:36:01.269775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.315 qpair failed and we were unable to recover it.
00:27:50.315 [2024-07-15 09:36:01.270066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.315 [2024-07-15 09:36:01.270130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.315 qpair failed and we were unable to recover it.
00:27:50.315 [2024-07-15 09:36:01.270363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.315 [2024-07-15 09:36:01.270426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.315 qpair failed and we were unable to recover it.
00:27:50.315 [2024-07-15 09:36:01.270681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.315 [2024-07-15 09:36:01.270746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.315 qpair failed and we were unable to recover it.
00:27:50.315 [2024-07-15 09:36:01.271006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.315 [2024-07-15 09:36:01.271102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.315 qpair failed and we were unable to recover it.
00:27:50.315 [2024-07-15 09:36:01.271394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.315 [2024-07-15 09:36:01.271460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.315 qpair failed and we were unable to recover it.
00:27:50.317 [2024-07-15 09:36:01.305616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.305671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.305899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.305940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.306104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.306146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.306302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.306343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.306535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.306587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.306819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.306861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.307056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.307118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.307341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.307403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.307627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.307687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.307892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.307935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 
00:27:50.317 [2024-07-15 09:36:01.308131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.308183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.308461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.308513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.308748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.308790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.308980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.309023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.309207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.309248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.309433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.317 [2024-07-15 09:36:01.309485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.317 qpair failed and we were unable to recover it. 00:27:50.317 [2024-07-15 09:36:01.309646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.309723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.309929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.309971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.310137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.310177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.310384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.310436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 
00:27:50.318 [2024-07-15 09:36:01.310683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.310749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.311035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.311104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.311373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.311427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.311663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.311728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.311957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.311999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.312263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.312329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.312534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.312609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.312845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.312900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.313132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.313186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.313393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.313445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 
00:27:50.318 [2024-07-15 09:36:01.313643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.313695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.313889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.313942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.314177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.314230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.314432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.314483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.314643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.314695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.314911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.314964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.315179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.315231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.315432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.315484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.315677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.315728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.315950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.316002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 
00:27:50.318 [2024-07-15 09:36:01.316201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.316254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.316442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.316498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.316658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.316741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.317029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.317085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.317330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.317385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.317599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.317655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.317868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.317927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.318140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.318197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.318418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.318484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.318720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.318775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 
00:27:50.318 [2024-07-15 09:36:01.318972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.319028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.319242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.319297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.319512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.319569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.319782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.319852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.320064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.320122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.320333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.320389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.320608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.320666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.320874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.320931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.321174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.321229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.321479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.321534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 
00:27:50.318 [2024-07-15 09:36:01.321748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.321817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.322024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.322079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.322298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.322356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.322599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.322654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.318 qpair failed and we were unable to recover it. 00:27:50.318 [2024-07-15 09:36:01.322909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.318 [2024-07-15 09:36:01.322966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.323177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.323233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.323482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.323538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.323747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.323816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.324036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.324091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.324295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.324351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 
00:27:50.319 [2024-07-15 09:36:01.324567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.324624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.324859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.324916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.325125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.325181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.325382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.325440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.325608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.325663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.325868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.325926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.326125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.326181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.326386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.326443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.326644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.326700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.326906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.326963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 
00:27:50.319 [2024-07-15 09:36:01.327208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.327266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.327471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.327529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.327717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.327785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.328078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.328139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.328349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.328405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.328679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.328744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.329022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.329098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.329397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.329463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.329716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.329792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.330061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.330127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 
00:27:50.319 [2024-07-15 09:36:01.330412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.330477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.330736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.330845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.331129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.331195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.331498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.331563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.331850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.331909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.332119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.332175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.332377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.332439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.332705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.332766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.332989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.333049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 00:27:50.319 [2024-07-15 09:36:01.333312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.319 [2024-07-15 09:36:01.333373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.319 qpair failed and we were unable to recover it. 
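For triage: errno = 111 in these records is ECONNREFUSED on Linux, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 (the NVMe/TCP default port) while the target is down. A minimal, hypothetical shell probe (not part of the test suite) that reproduces the same condition:

# Hypothetical probe, not from target_disconnect.sh: bash's /dev/tcp
# pseudo-device issues the same connect(2) the log shows; with no listener
# on the port, the kernel returns ECONNREFUSED (errno 111) and the exec fails.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420 && exec 3>&-'; then
    echo "listener up on 10.0.0.2:4420"
else
    echo "connect() refused or timed out -- target not listening yet"
fi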
00:27:50.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 943371 Killed "${NVMF_APP[@]}" "$@"
00:27:50.319 [2024-07-15 09:36:01.333646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.319 [2024-07-15 09:36:01.333706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.319 qpair failed and we were unable to recover it.
00:27:50.319 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:50.319 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:50.319 [2024-07-15 09:36:01.333966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.319 [2024-07-15 09:36:01.334027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.319 qpair failed and we were unable to recover it.
00:27:50.319 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:50.319 [2024-07-15 09:36:01.334202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.319 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:50.319 [2024-07-15 09:36:01.334267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.319 qpair failed and we were unable to recover it.
00:27:50.319 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the failure group for tqpair=0x7f6a50000b90 continues repeating with successive timestamps from 09:36:01.334436 through 09:36:01.338730 ...]
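The interleaved trace above is the pivot of this test case: the NVMF_APP instance launched from line 36 of target_disconnect.sh (pid 943371) has been killed, and tc2 calls disconnect_init 10.0.0.2, which restarts the target via nvmfappstart -m 0xF0. A hypothetical condensation of that sequence (not the actual script code), using only commands visible in this trace:

# Hypothetical condensation, not the real target_disconnect.sh: kill the old
# target, then relaunch nvmf_tgt inside the test netns; every host-side
# connect() to 10.0.0.2:4420 fails with errno 111 until the new listener is up.
kill -9 943371 2>/dev/null || true       # old "${NVMF_APP[@]}" instance
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF0 &             # relaunched target (pid 943999 below)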
00:27:50.320 [2024-07-15 09:36:01.338869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.320 [2024-07-15 09:36:01.338899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.320 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=943999
00:27:50.320 qpair failed and we were unable to recover it.
00:27:50.320 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:50.320 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 943999
00:27:50.320 [2024-07-15 09:36:01.339036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.320 [2024-07-15 09:36:01.339081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.320 qpair failed and we were unable to recover it.
00:27:50.320 [2024-07-15 09:36:01.339217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.320 [2024-07-15 09:36:01.339250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.320 qpair failed and we were unable to recover it.
00:27:50.320 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 943999 ']'
00:27:50.320 [2024-07-15 09:36:01.339373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.320 [2024-07-15 09:36:01.339402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.320 qpair failed and we were unable to recover it.
00:27:50.320 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:50.320 [2024-07-15 09:36:01.339525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.320 [2024-07-15 09:36:01.339554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.320 qpair failed and we were unable to recover it.
00:27:50.320 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:50.320 [2024-07-15 09:36:01.339706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.320 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:50.320 [2024-07-15 09:36:01.339736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:50.320 qpair failed and we were unable to recover it.
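The waitforlisten 943999 call traced above is the gate on the restart: the host-side connect() failures keep accumulating until the new nvmf_tgt process is alive and its RPC socket accepts connections. A rough sketch of such a wait loop, with hypothetical names (the real helper lives in autotest_common.sh and differs; rpc_addr=/var/tmp/spdk.sock and max_retries=100 are visible in the trace):

# Rough sketch with hypothetical names, not the autotest_common.sh code:
# poll until the given pid is alive and the SPDK RPC UNIX socket exists.
wait_for_listen_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do              # max_retries=100, per trace
        kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
        [[ -S $rpc_addr ]] && return 0           # RPC socket is up
        sleep 0.1
    done
    return 1                                     # retries exhausted
}
wait_for_listen_sketch 943999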
00:27:50.320 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:50.320 [2024-07-15 09:36:01.339920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.320 [2024-07-15 09:36:01.339985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.320 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:50.320 qpair failed and we were unable to recover it.
[... the failure group for tqpair=0x1d69200 then repeats with successive timestamps from 09:36:01.340232 through at least 09:36:01.348117 ...]
00:27:50.321 [2024-07-15 09:36:01.348260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.348312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.348504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.348561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.348791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.348856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.348948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.348977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.349097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.349146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.349312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.349365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.349499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.349569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.349735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.349764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.349892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.349921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.350022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.350051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 
00:27:50.321 [2024-07-15 09:36:01.350176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.350248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.350454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.350508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.350707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.350751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.350905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.350934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.351028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.351057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.351225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.351270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.351486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.351543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.351695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.351750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.351921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.351965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.352076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.352145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 
00:27:50.321 [2024-07-15 09:36:01.352384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.352447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.352637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.352693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.352854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.352883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.353000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.353028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.353180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.353235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.353408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.353463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.353630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.353687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.353875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.353906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.354033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.354062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.354154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.354185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 
00:27:50.321 [2024-07-15 09:36:01.354403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.354458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.354684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.354719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.354835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.354866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.354960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.354989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.321 qpair failed and we were unable to recover it. 00:27:50.321 [2024-07-15 09:36:01.355111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.321 [2024-07-15 09:36:01.355139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.355257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.355287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.355382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.355411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.355538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.355594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.355847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.355877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.355972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.356001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 
00:27:50.322 [2024-07-15 09:36:01.356193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.356249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.356424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.356480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.356704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.356760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.356962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.356992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.357124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.357153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.357284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.357313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.357449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.357478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.357572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.357601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.357720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.357763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.357936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.358003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 
00:27:50.322 [2024-07-15 09:36:01.358109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.358139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.358300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.358371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.358528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.358588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.358675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.358704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.358815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.358845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.358975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.359007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.359140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.359221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.359433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.359488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.359685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.359760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.359950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.359979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 
00:27:50.322 [2024-07-15 09:36:01.360184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.360255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.360491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.360542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.360772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.360808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.360928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.360981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.361121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.361165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.361333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.361386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.361671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.361723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.361890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.361919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.362022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.362052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.362212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.362280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 
00:27:50.322 [2024-07-15 09:36:01.362539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.362591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.362789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.362866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.362967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.362997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.363114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.363143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.363283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.363327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.363539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.363592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.363787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.363827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.363920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.363948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.364097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.364141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.364312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.364365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 
00:27:50.322 [2024-07-15 09:36:01.364519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.364568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.364712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.364741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.322 qpair failed and we were unable to recover it. 00:27:50.322 [2024-07-15 09:36:01.364857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.322 [2024-07-15 09:36:01.364888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.365004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.365033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.365139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.365169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.365265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.365296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.365416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.365445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.365642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.365703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.365911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.365941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.366064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.366093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 
00:27:50.323 [2024-07-15 09:36:01.366194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.366223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.366311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.366340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.366457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.366486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.366629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.366709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.366941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.366972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.367062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.367091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.367224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.367253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.367400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.367446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.367639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.367692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.367862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.367893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 
00:27:50.323 [2024-07-15 09:36:01.367982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.368011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.368103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.368133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.368225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.368254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.368348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.368377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.368502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.368531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.368714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.368794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.368901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.368932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.369078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.369108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.369229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.369295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.369491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.369569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 
00:27:50.323 [2024-07-15 09:36:01.369866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.369917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.370044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.370074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.370194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.370248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.370420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.370482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.370650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.370712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.370863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.370898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.370995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.371025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.371173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.371228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.371384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.371446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.371607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.371636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 
00:27:50.323 [2024-07-15 09:36:01.371729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.371758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.371894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.371925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.372044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.372096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.372273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.372325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.372478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.372531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.372744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.372787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.372909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.372962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.373116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.373167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.373357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.373400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 00:27:50.323 [2024-07-15 09:36:01.373559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.323 [2024-07-15 09:36:01.373602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.323 qpair failed and we were unable to recover it. 
00:27:50.323 [2024-07-15 09:36:01.373785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.323 [2024-07-15 09:36:01.373821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.323 qpair failed and we were unable to recover it.
00:27:50.323 [2024-07-15 09:36:01.374931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.323 [2024-07-15 09:36:01.374964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.323 qpair failed and we were unable to recover it.
00:27:50.324 [2024-07-15 09:36:01.375755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.324 [2024-07-15 09:36:01.375836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.324 qpair failed and we were unable to recover it.
00:27:50.324 [2024-07-15 09:36:01.383304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.324 [2024-07-15 09:36:01.383383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.324 qpair failed and we were unable to recover it.
00:27:50.324 [2024-07-15 09:36:01.385668] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization...
00:27:50.324 [2024-07-15 09:36:01.385762] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:50.324 [2024-07-15 09:36:01.385837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.324 [2024-07-15 09:36:01.385870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.324 qpair failed and we were unable to recover it.
00:27:50.324 [2024-07-15 09:36:01.386295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.324 [2024-07-15 09:36:01.386323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.324 qpair failed and we were unable to recover it.
00:27:50.325 [2024-07-15 09:36:01.387698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.325 [2024-07-15 09:36:01.387744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.325 qpair failed and we were unable to recover it.
00:27:50.325 [2024-07-15 09:36:01.387873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.325 [2024-07-15 09:36:01.387917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.325 qpair failed and we were unable to recover it.
00:27:50.327 [2024-07-15 09:36:01.406526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.327 [2024-07-15 09:36:01.406565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.327 qpair failed and we were unable to recover it.
00:27:50.327 [2024-07-15 09:36:01.408062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.408088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.408195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.408238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.408321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.408347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.408499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.408530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.408640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.408668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.408782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.408819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.408936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.408963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.409051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.409079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.409205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.409234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.409322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.409349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 
00:27:50.327 [2024-07-15 09:36:01.409465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.409495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.409588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.409614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.409722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.409754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.409857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.409883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.409974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.409999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.410130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.410158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.410320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.410348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.410469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.410508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.327 qpair failed and we were unable to recover it. 00:27:50.327 [2024-07-15 09:36:01.410624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.327 [2024-07-15 09:36:01.410652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.410768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.410794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 
00:27:50.328 [2024-07-15 09:36:01.410889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.410914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.411003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.411047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.411163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.411191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.411309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.411338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.411445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.411491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.411628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.411655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.411741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.411767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.411900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.411927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.412035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.412061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.412147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.412173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 
00:27:50.328 [2024-07-15 09:36:01.412256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.412283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.412403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.412429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.412542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.412569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.412657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.412682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.412758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.412783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.412916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.412954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.413051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.413078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.413175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.413201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.413288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.413313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.413431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.413470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 
00:27:50.328 [2024-07-15 09:36:01.413580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.413622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.413716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.413743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.413848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.413875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.413989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.414016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.414097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.414130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.414218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.414243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.414339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.414367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.414464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.414497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.414604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.414632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.414716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.414741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 
00:27:50.328 [2024-07-15 09:36:01.414840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.414866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.414980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.415007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.415095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.415123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.415224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.415252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.415346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.415374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.415452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.415478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.415588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.415614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.415705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.415731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.415823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.415848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.415926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.415951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 
00:27:50.328 [2024-07-15 09:36:01.416054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.416085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.416167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.416193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.416289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.416328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.416447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.416476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.416572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.416602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.416699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.416726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.328 qpair failed and we were unable to recover it. 00:27:50.328 [2024-07-15 09:36:01.416841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.328 [2024-07-15 09:36:01.416869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.416958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.416985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.417097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.417127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.417208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.417234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 
00:27:50.329 [2024-07-15 09:36:01.417350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.417376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.417457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.417482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.417561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.417586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.417737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.417765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.417880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.417909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.417997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.418022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.418131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.418157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.418238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.418265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.418364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.418402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.418497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.418528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 
00:27:50.329 [2024-07-15 09:36:01.418613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.418639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.418752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.418779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.418869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.418897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.418983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.419008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.419114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.419140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.419254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.419280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.419365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.419392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.419480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.419506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.419594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.419623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.419713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.419742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 
00:27:50.329 [2024-07-15 09:36:01.419839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.419865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.419948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.419973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.420081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.420107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.420226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.420252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.420339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.420365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.420447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.420473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.420554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.420580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.420691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.420719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.420825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.420852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.420939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.420965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 
00:27:50.329 [2024-07-15 09:36:01.421072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.421098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.421220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.421246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.421382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.421408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.421513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.421539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.421622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.421647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.421757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.421782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.421886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.421915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.422003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.422028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.422148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.422174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.422284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.422310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 
00:27:50.329 [2024-07-15 09:36:01.422395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.422420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.422540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.422578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.422697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.422725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.422837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.422863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.422948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.422974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.423059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.423084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.423173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.423200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.423283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.423310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.423448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.329 [2024-07-15 09:36:01.423476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.329 qpair failed and we were unable to recover it. 00:27:50.329 [2024-07-15 09:36:01.423579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.423619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 
00:27:50.330 [2024-07-15 09:36:01.423721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.423750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.423839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.423868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.423956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.423984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.424077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.424103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.424181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.424207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.424341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.424368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.424451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.424477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.424615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.424643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.424725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.424753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.424851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.424880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 
00:27:50.330 [2024-07-15 09:36:01.425021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.425048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.425134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.425160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.425249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.425275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.425372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.425399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.425482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.425508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.425591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.425617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.425711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.425736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.425862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.425901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.426000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.426028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.426150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.426176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 
00:27:50.330 [2024-07-15 09:36:01.426260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.426285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.426377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.426403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.426484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.426510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.426589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.426616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.426708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.426747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.426846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.426876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.426997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.427029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.427122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.427150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.427265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.427291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 00:27:50.330 [2024-07-15 09:36:01.427406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.330 [2024-07-15 09:36:01.427433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.330 qpair failed and we were unable to recover it. 
00:27:50.330 [2024-07-15 09:36:01.427551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.427579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.427679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.427717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.427818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.427847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.427927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.427953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.428029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.428055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.428184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.428210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.428319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.428345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.428434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.428461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.428551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.428577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.428656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.428682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.428819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.428846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.428958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.428984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.429077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.429103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.429220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.429246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.429331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.429357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.429467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.429494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.429593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.429631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.429761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.429814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.429912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.429939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.430027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.430055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.430141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.430167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.430251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.430277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.430394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.430420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.430539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.430568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.430700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.430726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.430821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.430848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.430933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.430959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.431040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.431065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.431147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.431172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.431283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.431312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.431428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.431456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.431547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.431574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.431710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.431736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.431855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.431881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.431964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.431990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.432064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.432090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.432206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.432231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.432344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.432370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.432445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.432472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.432585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.330 [2024-07-15 09:36:01.432614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.330 qpair failed and we were unable to recover it.
00:27:50.330 [2024-07-15 09:36:01.432699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.432725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.432818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.432846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.432954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.432981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.433068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.433095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.433177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.433203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.433291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.433319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.433406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.433435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.433515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.433542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.433622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.433648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.433736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.433762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.433860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.433887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.433971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.433999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.434082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.434108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.434214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.434241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.434328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.434354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.434470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.434496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.434614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.434642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.434783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.434814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.434903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.434929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.435014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.435040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.435159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.435186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.435321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.435347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.435436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.435463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.435574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.435603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.435694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.435720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.435799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.435830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.435936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.435963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.436045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.436070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.436162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.436188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.436321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.436346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.436452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.436478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.436562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.436587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.436664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.436689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.436811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.436837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.436917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.436941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.437019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.437045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.437172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.437199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.437291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.437317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.437399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.437424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.437514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.437553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.437643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.437671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.437752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.437778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.437877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.437903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.438019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.438044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.438136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.438161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.438274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.438300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.438379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.438404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.438503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.438542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.438635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.438662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.438753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.438779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.438912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.438938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.439020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.439047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.439139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.439166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.439277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.439302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.439388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.439417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.439543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.439582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.439701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.439728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.439852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.439878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.439956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.439982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.440069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.440095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.440202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.440228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.440366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.440392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.440477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.440503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.440615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.440645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.440732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.440759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.440852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.440878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.440957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.440982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.441065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.441090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.441198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.441224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.441302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.331 [2024-07-15 09:36:01.441326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.331 qpair failed and we were unable to recover it.
00:27:50.331 [2024-07-15 09:36:01.441404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.441430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.441539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.441565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.441670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.441695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.441778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.441808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.441899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.441926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.442003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.442027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.442125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.442150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.442232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.442257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.442342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.442367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.442464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.442493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.442621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.442659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.442777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.442810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.442924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.442950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.443031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.443057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.443162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.443190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.443278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.443305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.443397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.443425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.443519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.443546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.443662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.443688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.443811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.443838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.443973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.444004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.444124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.444150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.444239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.444265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.444347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.444373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.444473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.444503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.444595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.444622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.444709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.444735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.444854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.444880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.444962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.444987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.445066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.445091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.445169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.445194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.445280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.445304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.445415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.445440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.445532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.445562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.445640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.445665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.445751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.445779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.445911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.445938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.446025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.446051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.446166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.446192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.446302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.446328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.446442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.446471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.446588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.446615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.446708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.446736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.446824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.446851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.446968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.446996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.447075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.447102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.447207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.447233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.447329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.447356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.447470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.447498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.332 [2024-07-15 09:36:01.447614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.332 [2024-07-15 09:36:01.447640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.332 qpair failed and we were unable to recover it.
00:27:50.605 [2024-07-15 09:36:01.447726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.605 [2024-07-15 09:36:01.447752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.605 qpair failed and we were unable to recover it.
00:27:50.605 [2024-07-15 09:36:01.447870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.605 [2024-07-15 09:36:01.447896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.605 qpair failed and we were unable to recover it.
00:27:50.605 [2024-07-15 09:36:01.447976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.605 [2024-07-15 09:36:01.448002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.605 qpair failed and we were unable to recover it.
00:27:50.605 [2024-07-15 09:36:01.448119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.605 [2024-07-15 09:36:01.448145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.605 qpair failed and we were unable to recover it.
00:27:50.605 [2024-07-15 09:36:01.448252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.605 [2024-07-15 09:36:01.448278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.605 qpair failed and we were unable to recover it.
00:27:50.605 [2024-07-15 09:36:01.448388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.605 [2024-07-15 09:36:01.448413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.605 qpair failed and we were unable to recover it.
00:27:50.605 [2024-07-15 09:36:01.448517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.605 [2024-07-15 09:36:01.448542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.605 qpair failed and we were unable to recover it.
00:27:50.605 [2024-07-15 09:36:01.448614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.605 [2024-07-15 09:36:01.448640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.605 qpair failed and we were unable to recover it.
00:27:50.605 [2024-07-15 09:36:01.448731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.605 [2024-07-15 09:36:01.448759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.605 qpair failed and we were unable to recover it.
00:27:50.605 [2024-07-15 09:36:01.448858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.605 [2024-07-15 09:36:01.448884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.605 qpair failed and we were unable to recover it.
00:27:50.605 [2024-07-15 09:36:01.448963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.448994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.449079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.449105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.449185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.449215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.449294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.449320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.449407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.449433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.449509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.449535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.449677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.449706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.449792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.449823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.449906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.449933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.450044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.450070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 
00:27:50.605 [2024-07-15 09:36:01.450204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.450230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.450306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.450333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.450409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.450435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.450551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.450580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.450705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.450744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.450851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.450882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.450972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.450998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.451082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.451108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.451216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.451242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.451324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.451349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 
00:27:50.605 [2024-07-15 09:36:01.451421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.451447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.451555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.451581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.451667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.451694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.451773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.451805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.605 qpair failed and we were unable to recover it. 00:27:50.605 [2024-07-15 09:36:01.451897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.605 [2024-07-15 09:36:01.451924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.452032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.452058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.452166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.452192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.452308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.452338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.452423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.452450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.452534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.452561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 
00:27:50.606 [2024-07-15 09:36:01.452705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.452731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.452822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.452849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.452936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.452965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.453052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.453079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.453169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.453196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.453302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.453328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.453439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.453466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.453571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.453598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.453705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.453731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.453848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.453874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 
00:27:50.606 [2024-07-15 09:36:01.453983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.454017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.454138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.454165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.454248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.454274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.454368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.454394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.454471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.454498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.454609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.454636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.454717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.454744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.454847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.454875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.454989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.455016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.455133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.455159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 
00:27:50.606 [2024-07-15 09:36:01.455240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.455266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.455350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.455375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.455487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.455512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.455623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.455648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.455754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.455780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.455876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.455901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.455979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.456005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.456081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.456106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.456211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.456237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.456317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.456342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 
00:27:50.606 [2024-07-15 09:36:01.456416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.456440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.456548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.456573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.456660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.456688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.456764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.456791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.456884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.456912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.456996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.457022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.457135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.457134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:50.606 [2024-07-15 09:36:01.457162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.457270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.457296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.457380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.457406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 
00:27:50.606 [2024-07-15 09:36:01.457524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.457563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.457663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.457701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.457829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.457856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.457970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.457995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.458073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.458099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.458205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.458231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.458346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.458372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.458491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.458520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.458651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.458690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.458780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.458815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 
00:27:50.606 [2024-07-15 09:36:01.458899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.458926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.459035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.459065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.459145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.459171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.459282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.459309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.459416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.459444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.459560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.459599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.459678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.459705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.459783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.459823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.459918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.459944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.460026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.460052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 
00:27:50.606 [2024-07-15 09:36:01.460139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.460165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.460276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.460304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.606 [2024-07-15 09:36:01.460418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.606 [2024-07-15 09:36:01.460445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.606 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.460531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.460558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.460666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.460692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.460779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.460821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.460912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.460938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.461021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.461049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.461134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.461160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.461284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.461311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 
00:27:50.607 [2024-07-15 09:36:01.461432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.461459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.461579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.461609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.461699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.461725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.461845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.461873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.461958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.461985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.462071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.462097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.462176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.462202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.462317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.462343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.462455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.462481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.462629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.462668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 
00:27:50.607 [2024-07-15 09:36:01.462814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.462843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.462929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.462955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.463040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.463066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.463143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.463170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.463251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.463278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.463388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.463414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.463527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.463553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.463637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.463665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.463750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.463776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.463896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.463926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 
00:27:50.607 [2024-07-15 09:36:01.464039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.464065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.464180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.464212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.464297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.464323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.464399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.464424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.464504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.464530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.464623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.464650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.464805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.464831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.464945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.464972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.465053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.465084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.465203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.465228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 
00:27:50.607 [2024-07-15 09:36:01.465345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.465373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.465459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.465486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.465600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.465629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.465718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.465745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.465843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.465870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.465962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.465989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.466125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.466152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.466229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.466255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.466391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.466417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.466529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.466556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 
00:27:50.607 [2024-07-15 09:36:01.466640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.466669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.466789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.466848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.466947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.466973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.467051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.467077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.467177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.467203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.467323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.467349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.467431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.467456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.467573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.467599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.467688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.467717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 00:27:50.607 [2024-07-15 09:36:01.467839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.607 [2024-07-15 09:36:01.467868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.607 qpair failed and we were unable to recover it. 
00:27:50.607 [2024-07-15 09:36:01.467955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.607 [2024-07-15 09:36:01.467981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.607 qpair failed and we were unable to recover it.
00:27:50.608 [2024-07-15 09:36:01.469157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.608 [2024-07-15 09:36:01.469185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.608 qpair failed and we were unable to recover it.
00:27:50.608 [2024-07-15 09:36:01.469779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.608 [2024-07-15 09:36:01.469832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.608 qpair failed and we were unable to recover it.
00:27:50.608 [2024-07-15 09:36:01.470798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.608 [2024-07-15 09:36:01.470832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.608 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 (ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." triples repeat continuously from 09:36:01.467955 through 09:36:01.494306 for tqpairs 0x7f6a60000b90, 0x7f6a58000b90, 0x7f6a50000b90, and 0x1d69200, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:27:50.611 [2024-07-15 09:36:01.494395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.494434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.494549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.494576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.494696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.494731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.494831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.494859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.494950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.494977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.495090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.495125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.495210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.495238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.495374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.495401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.495487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.495514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.495622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.495651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 
00:27:50.611 [2024-07-15 09:36:01.495742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.495769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.495867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.495895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.495977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.496003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.496080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.496116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.496198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.496224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.496338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.496366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.496493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.496523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.496640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.496668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.496781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.496823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.496935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.496962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 
00:27:50.611 [2024-07-15 09:36:01.497051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.497077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.497162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.497190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.497305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.497333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.497445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.497472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.497556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.497583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.497672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.497699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.497783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.497827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.497943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.497970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.498084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.498116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.498233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.498265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 
00:27:50.611 [2024-07-15 09:36:01.498373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.498401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.498486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.498513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.498594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.498623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.498734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.498761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.498861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.498889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.498978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.499005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.499114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.499140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.499221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.499249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.499372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.499411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.499506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.499534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 
00:27:50.611 [2024-07-15 09:36:01.499621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.499650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.499737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.499765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.499891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.499930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.500031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.500060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.500144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.500172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.500284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.500311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.500427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.500455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.500542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.500569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.500655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.500682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.500774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.500817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 
00:27:50.611 [2024-07-15 09:36:01.500912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.500939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.501032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.501058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.501209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.501235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.501321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.501348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.501435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.501464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.501559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.501598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.501756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.501797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.501903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.501931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.502014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.502041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.502139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.502166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 
00:27:50.611 [2024-07-15 09:36:01.502272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.611 [2024-07-15 09:36:01.502298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.611 qpair failed and we were unable to recover it. 00:27:50.611 [2024-07-15 09:36:01.502404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.502430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.502516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.502544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.502656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.502684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.502765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.502794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.502885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.502912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.503026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.503054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.503138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.503164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.503242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.503270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.503357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.503389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 
00:27:50.612 [2024-07-15 09:36:01.503472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.503498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.503621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.503660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.503749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.503777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.503872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.503899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.504007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.504034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.504121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.504147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.504303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.504331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.504427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.504454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.504550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.504578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.504692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.504719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 
00:27:50.612 [2024-07-15 09:36:01.504808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.504835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.504945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.504972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.505060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.505091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.505216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.505243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.505327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.505354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.505432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.505458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.505552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.505592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.505683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.505711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.505794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.505828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.505917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.505944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 
00:27:50.612 [2024-07-15 09:36:01.506031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.506056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.506135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.506160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.506267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.506293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.506381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.506411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.506500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.506527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.506641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.506668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.506782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.506830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.506921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.506948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.507060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.507086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.507182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.507210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 
00:27:50.612 [2024-07-15 09:36:01.507289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.507317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.507406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.507436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.507554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.507581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.507661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.507688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.507767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.507793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.507908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.507935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.508049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.508075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.508155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.508183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.508271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.508298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.508385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.508412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 
00:27:50.612 [2024-07-15 09:36:01.508497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.508525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.508649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.508688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.508775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.508807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.508894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.508920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.509033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.509059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.509199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.509229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.509348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.509375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.509467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.509495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.509574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.509601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.509709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.509736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 
00:27:50.612 [2024-07-15 09:36:01.509829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.509856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.509935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.509961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.510073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.510100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.510213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.510240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.510349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.510376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.510486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.510512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.510600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.510627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.510712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.510738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.510828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.510855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 00:27:50.612 [2024-07-15 09:36:01.510940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.612 [2024-07-15 09:36:01.510968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.612 qpair failed and we were unable to recover it. 
00:27:50.612 [2024-07-15 09:36:01.511065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.511110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.511204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.511232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.511369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.511395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.511481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.511507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.511592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.511620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.511707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.511736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.511829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.511860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.511992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.512020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.512113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.512139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.512254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.512281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 
00:27:50.613 [2024-07-15 09:36:01.512356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.512383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.512465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.512493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.512614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.512642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.512756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.512783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.512902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.512929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.513016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.513043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.513164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.513191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.513303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.513331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.513439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.513465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.513583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.513610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 
00:27:50.613 [2024-07-15 09:36:01.513698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.513726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.513838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.513876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.513977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.514005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.514117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.514143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.514220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.514246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.514350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.514376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.514482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.514509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.514631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.514670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.514816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.514845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.514928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.514955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 
00:27:50.613 [2024-07-15 09:36:01.515070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.515097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.515234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.515261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.515393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.515420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.515499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.515526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.515637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.515675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.515816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.515844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.515954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.515981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.516094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.516121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.516206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.516232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.516350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.516376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 
00:27:50.613 [2024-07-15 09:36:01.516483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.516510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.516628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.516657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.516750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.516778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.516895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.516924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.517037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.517063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.517156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.517183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.517267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.517299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.517384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.517410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.517523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.517549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.517659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.517696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 
00:27:50.613 [2024-07-15 09:36:01.517774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.517818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.517912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.517938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.518046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.518072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.518158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.518185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.518276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.518302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.518394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.518420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.518534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.518561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.518644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.518671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.518782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.518822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.518901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.518927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 
00:27:50.613 [2024-07-15 09:36:01.519025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.519052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.613 [2024-07-15 09:36:01.519175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.613 [2024-07-15 09:36:01.519202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.613 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.519292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.519332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.519424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.519452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.519579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.519618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.519741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.519769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.519865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.519894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.519975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.520001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.520097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.520123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.520266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.520294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 
00:27:50.614 [2024-07-15 09:36:01.520402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.520431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.520517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.520544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.520627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.520654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.520749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.520777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.520882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.520910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.521020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.521046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.521141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.521167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.521276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.521302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.521392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.521419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.521510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.521538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 
00:27:50.614 [2024-07-15 09:36:01.521618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.521644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.521755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.521781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.521876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.521904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.521988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.522015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.522135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.522160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.522269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.522295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.522387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.522421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.522510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.522536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.522617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.522644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.522730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.522758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 
00:27:50.614 [2024-07-15 09:36:01.522856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.522883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.522969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.522995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.523075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.523110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.523226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.523253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.523372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.523399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.523512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.523539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.523622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.523649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.523734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.523761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.523896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.523924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.524004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.524031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 
00:27:50.614 [2024-07-15 09:36:01.524153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.524179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.524292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.524320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.524431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.524457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.524541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.524567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.524664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.524690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.524836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.524863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.524950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.524976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.525062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.525090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.525187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.525214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.525296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.525322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 
00:27:50.614 [2024-07-15 09:36:01.525420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.525458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.525593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.525633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.525735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.525764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.525920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.525947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.526039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.526067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.526178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.526205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.526282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.526309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.526418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.526444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.526561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.526590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.526676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.526703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 
00:27:50.614 [2024-07-15 09:36:01.526826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.526853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.526968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.526994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.527075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.527107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.527224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.527250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.527339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.527364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.527446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.527471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.527582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.527608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.527728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.527757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.527857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.527885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 00:27:50.614 [2024-07-15 09:36:01.527998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.614 [2024-07-15 09:36:01.528025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.614 qpair failed and we were unable to recover it. 
00:27:50.615 [2024-07-15 09:36:01.528144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.528181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.528286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.528313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.528408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.528447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.528536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.528563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.528678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.528705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.528824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.528851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.528931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.528957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.529046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.529072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.529169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.529196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.529314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.529341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 
00:27:50.615 [2024-07-15 09:36:01.529425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.529455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.529566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.529592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.529683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.529710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.529822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.529849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.529961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.529987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.530073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.530109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.530207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.530234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.530313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.530339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.530433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.530462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.530568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.530594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 
00:27:50.615 [2024-07-15 09:36:01.530728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.530754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.530870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.530897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.530974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.531000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.531106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.531142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.531267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.531296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.531379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.531406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.531519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.531546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.531684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.531711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.531787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.531818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.531901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.531927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 
00:27:50.615 [2024-07-15 09:36:01.532012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.532039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.532155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.532181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.532262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.532289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.532398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.532425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.532523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.532562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.532680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.532708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.532796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.532828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.532918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.532945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.533036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.533062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.533197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.533223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 
00:27:50.615 [2024-07-15 09:36:01.533307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.533333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.533421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.533448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.533556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.533584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.533737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.533776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.533881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.533910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.533997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.534024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.534118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.534146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.534250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.534276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.534397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.534424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.534550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.534577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 
00:27:50.615 [2024-07-15 09:36:01.534671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.534701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.534791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.534824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.534908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.534934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.535023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.535049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.535160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.535187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.535293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.535319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.535400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.535428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.535543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.535572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.535656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.535682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 00:27:50.615 [2024-07-15 09:36:01.535760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.615 [2024-07-15 09:36:01.535786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.615 qpair failed and we were unable to recover it. 
00:27:50.615 [2024-07-15 09:36:01.535907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.615 [2024-07-15 09:36:01.535932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.615 qpair failed and we were unable to recover it.
00:27:50.615 [2024-07-15 09:36:01.536012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.536038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.536132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.536158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.536305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.536333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.536425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.536453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.536562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.536589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.536667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.536693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.536806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.536832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.536939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.536965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.537075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.537101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.537189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.537215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.537305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.537332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.537470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.537496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.537575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.537601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.537725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.537752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.537857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.537884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.538000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.538027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.538119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.538145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.538229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.538255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.538351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.538389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.538474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.538502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.538643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.538670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.538778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.538809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.538893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.538919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.539007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.539033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.539126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.539152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.539261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.539289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.539383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.539410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.539500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.539527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.539615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.539642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.539724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.539756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.539880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.539908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.539994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.540020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.540112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.540137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.540244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.540270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.540383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.540408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.540492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.540517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.540598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.540626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.540738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.540766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.540879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.540906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.541010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.541036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.541148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.541175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.541260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.541287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.541386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.541413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.541506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.541533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.541644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.541670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.541813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.541840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.541919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.541945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.542019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.542046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.542126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.542152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.542240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.542265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.542388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.542427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.542543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.542570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.542697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.542737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.542862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.542900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.542989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.543016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.543137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.543168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.543312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.543338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.543454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.543482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.543607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.543635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.543730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.543757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.543844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.543871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.543955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.543981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.544082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.544131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.544235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.544262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.544379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.544406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.544522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.544549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.544657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.544685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.544795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.544828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.544910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.544936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.616 [2024-07-15 09:36:01.545054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.616 [2024-07-15 09:36:01.545081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.616 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.545220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.545247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.545331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.545357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.545478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.545506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.545595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.545622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.545732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.545759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.545882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.545911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.546022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.546061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.546182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.546210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.546324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.546350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.546459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.546485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.546575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.546602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.546714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.546742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.546880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.546908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.547027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.547054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.547148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.547174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.547284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.547311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.547400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.547427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.547513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.547540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.547628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.547654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.547737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.547763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.547867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.547895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.548004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.548030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.548127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.548153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.548271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.548298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.548380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.548407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.548497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.548524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.548633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.548664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.548768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.548813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.548941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.548968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.549118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.549145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.549255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.549282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.549392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.549418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.549562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.549590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.549728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.549754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.549867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.549894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.550037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.550063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.550151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.550177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.550313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.550339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.550477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.550503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.550593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.550619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.550720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.550759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.550858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.550886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.550971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.550998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.551080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.551105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.551187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.551213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.551349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.551375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.551462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.551490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.551577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.551603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.551741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.551781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.551895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.551923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.552010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.552037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.552184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.552211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.552323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.552349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.552439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.552467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.552545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.552571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.552652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.552679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.552810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.552838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.552930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.552956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.553070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.553096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.553183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.553210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.553305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.553331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.553437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.553462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.553575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.553603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.553712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.553738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.553829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.553856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.553961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.553987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.554066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.554097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.554178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.554204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.617 [2024-07-15 09:36:01.554314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.617 [2024-07-15 09:36:01.554339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.617 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.554419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.554445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.554534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.554559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.554634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.554660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.554786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.554837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.554958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.554986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.555097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.555123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.555241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.555268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.555382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.555408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.555494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.555523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.555638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.555664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.555812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.555839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.555933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.555960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.556042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.556068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.556149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.556175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.556291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.556319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.556438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.556464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.556544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.556570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.556649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.556675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.556787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.556827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.556937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.556964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.557098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.557124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.557235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.557262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.557340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.557368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.557463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.557492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.557594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.557638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.557759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.557787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.557917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.557944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.558032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.558058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.558164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.558189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.558309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.558336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.558426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.558453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.558554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.558594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.558718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.558746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.558847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.558874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.558960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.558987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.559069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.559109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.559200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.559228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.559336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.559362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.559457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.559484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.559568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.559594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.559678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.559706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.559824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.559853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.559948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.559975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.560087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.560120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.560201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.560228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.560310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.560337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.560424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.560451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.560560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.560588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.560670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.560696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.560810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.560837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.560928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.560954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.561043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.561072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.561180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.561206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.561321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.561349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.561438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.561465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.561572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.561598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.561681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.561709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.561799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.561846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.561935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.561962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.562073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.562105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.618 [2024-07-15 09:36:01.562212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.618 [2024-07-15 09:36:01.562238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.618 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.562328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.562355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.562465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.562492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.562578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.562605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.562731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.562764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.562888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.562915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.562994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.563023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.563108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.563135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.563218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.563245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.563356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.563384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.563472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.563499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.563592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.563620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.563705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.563731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.563877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.563904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.564012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.564038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.564133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.564160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.564300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.564328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.564416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.564443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.564580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.564619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.564702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.564730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.564820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.564848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.564938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.564964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.565052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.565079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.565158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.565183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.565292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.565319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.565433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.565461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.565543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.565571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.565686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.565713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.565824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.565852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.565938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.565965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.566050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.566078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.566201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.566228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.566356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.566397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.566497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.566525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.566656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.566696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.566853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.566880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.566966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.566992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.567108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.567134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.619 [2024-07-15 09:36:01.567226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.619 [2024-07-15 09:36:01.567258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.619 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.567349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.567380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.567500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.567526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.567636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.567663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.567745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.567772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.567857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.567885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.567976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.568003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.568117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.568145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.568264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.568290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.568369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.568396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.568512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.568538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.568621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.568648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.568759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.568785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.568896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.568922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.569002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.569028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.569118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.569144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.569223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.569248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.569331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.569358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.569449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.569477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.569567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.569595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.569687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.569714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.569788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.569825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.569900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.569927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.570016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.570042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.570134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.570160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.570238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.570266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.570355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.570383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.570468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.570496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.570580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.570607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.570619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:50.620 [2024-07-15 09:36:01.570653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:50.620 [2024-07-15 09:36:01.570672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:50.620 [2024-07-15 09:36:01.570686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:50.620 [2024-07-15 09:36:01.570688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.570697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:50.620 [2024-07-15 09:36:01.570713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.570810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.570836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.570772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:50.620 [2024-07-15 09:36:01.570832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:50.620 [2024-07-15 09:36:01.570921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.570947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.571038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.571063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.571158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.571189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.571165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:50.620 [2024-07-15 09:36:01.571171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:50.620 [2024-07-15 09:36:01.571283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.571310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.571402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.571430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.571520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.571546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.571644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.620 [2024-07-15 09:36:01.571672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.620 qpair failed and we were unable to recover it.
00:27:50.620 [2024-07-15 09:36:01.571756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.571783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.571886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.571913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.571996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.572022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.572121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.572147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.572241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.572268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.572349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.572375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.572476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.572504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.572594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.572621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.572707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.572735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.572832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.572859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.572941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.572969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.573050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.573075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.573159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.573185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.573269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.573295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.573386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.573413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.573502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.573529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.573618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.573648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.573737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.573765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.573863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.573890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.574005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.574031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.574139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.574165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.574247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.574273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.574349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.574375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.574467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.574495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.574614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.574642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.574727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.574755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.574865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.574893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.574979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.575005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.575124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.575150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.575229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.575256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.575338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.575365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.575453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.575479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.575562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.575593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.575674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.575702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.575798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.575834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.575918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.575945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.576028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.576053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.576141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.576166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.576253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.576280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.576359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.576385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.576468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.576494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.576570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.576596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.576680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.576708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.576796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.576831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.576922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.576948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.577025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.577052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.621 qpair failed and we were unable to recover it.
00:27:50.621 [2024-07-15 09:36:01.577138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.621 [2024-07-15 09:36:01.577164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.577249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.577275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.577364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.577392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.577479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.577506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.577584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.577611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.577693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.577720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.577849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.577876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.577989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.578015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.578110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.578138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.578223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.578249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.578329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.578356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.578437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.578463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.578536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.578562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.578661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.578700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.578818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.622 [2024-07-15 09:36:01.578847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.622 qpair failed and we were unable to recover it.
00:27:50.622 [2024-07-15 09:36:01.578933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.578959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.579040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.579066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.579141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.579166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.579241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.579267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.579362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.579389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.579471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.579498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.579574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.579601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.579681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.579707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.579785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.579816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.579901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.579927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 
00:27:50.622 [2024-07-15 09:36:01.580017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.580043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.580147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.580174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.580261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.580288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.580371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.580397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.580480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.580506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.580603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.580643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.580735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.580762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.580850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.580877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.580960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.580986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.581067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.581101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 
00:27:50.622 [2024-07-15 09:36:01.581204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.581230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.581316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.581343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.581453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.581479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.581572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.581611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.581728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.581755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.581868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.581899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.581983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.582010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.582101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.582128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.582210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.582236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.582323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.582349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 
00:27:50.622 [2024-07-15 09:36:01.582437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.582464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.582577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.582604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.582682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.582708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.582788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.582832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.582918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.582943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.583028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.583054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.583191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.583217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.583328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.583354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.583441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.583472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 00:27:50.622 [2024-07-15 09:36:01.583548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.622 [2024-07-15 09:36:01.583574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.622 qpair failed and we were unable to recover it. 
00:27:50.623 [2024-07-15 09:36:01.583670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.583709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.583814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.583842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.583921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.583946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.584021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.584047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.584141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.584167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.584276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.584302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.584401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.584428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.584526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.584565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.584654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.584681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.584763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.584789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 
00:27:50.623 [2024-07-15 09:36:01.584903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.584928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.585045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.585071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.585157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.585184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.585270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.585297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.585380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.585406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.585514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.585541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.585622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.585648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.585730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.585756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.585841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.585873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.585962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.585987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 
00:27:50.623 [2024-07-15 09:36:01.586100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.586126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.586217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.586244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.586335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.586361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.586443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.586468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.586556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.586581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.586661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.586688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.586777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.586808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.586892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.586917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.587001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.587027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.587115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.587141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 
00:27:50.623 [2024-07-15 09:36:01.587218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.587243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.587324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.587351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.587438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.587465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.587542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.587567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.587649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.587674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.587780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.587810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.587901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.587926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.588014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.588040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.588137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.588163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.588249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.588277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 
00:27:50.623 [2024-07-15 09:36:01.588352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.588378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.588503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.588543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.588635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.588662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.588748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.588775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.588876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.588902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.588981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.589007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.589113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.589140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.589230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.589256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.589342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.589369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 00:27:50.623 [2024-07-15 09:36:01.589447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.623 [2024-07-15 09:36:01.589473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.623 qpair failed and we were unable to recover it. 
00:27:50.623 [2024-07-15 09:36:01.589546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.589571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.589704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.589729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.589826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.589859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.589937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.589963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.590048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.590075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.590178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.590203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.590284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.590311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.590436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.590464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.590550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.590578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.590714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.590742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 
00:27:50.624 [2024-07-15 09:36:01.590868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.590895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.590972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.590998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.591083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.591110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.591206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.591232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.591315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.591342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.591422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.591452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.591529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.591556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.591652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.591679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.591773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.591806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.591903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.591929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 
00:27:50.624 [2024-07-15 09:36:01.592006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.592032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.592129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.592154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.592234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.592259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.592334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.592359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.592442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.592470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.592558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.592584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.592662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.592689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.592766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.592791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.592887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.592913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.593001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.593028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 
00:27:50.624 [2024-07-15 09:36:01.593113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.593139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.593227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.593253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.593338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.593365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.593458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.593486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.593573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.593599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.593682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.593707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.593807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.593833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.593915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.593940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.594021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.594045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.594162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.594186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 
00:27:50.624 [2024-07-15 09:36:01.594263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.594288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.594367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.594392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.594474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.594506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.594591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.594617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.594695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.594720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.594810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.594837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.594920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.594946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.595035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.595060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.595157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.595182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.595257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.595282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 
00:27:50.624 [2024-07-15 09:36:01.595375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.595403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.595492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.595519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.595605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.595631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.595717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.595742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.595824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.595856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.624 [2024-07-15 09:36:01.595942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.624 [2024-07-15 09:36:01.595967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.624 qpair failed and we were unable to recover it. 00:27:50.625 [2024-07-15 09:36:01.596052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.625 [2024-07-15 09:36:01.596076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.625 qpair failed and we were unable to recover it. 00:27:50.625 [2024-07-15 09:36:01.596157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.625 [2024-07-15 09:36:01.596182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.625 qpair failed and we were unable to recover it. 00:27:50.625 [2024-07-15 09:36:01.596261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.625 [2024-07-15 09:36:01.596287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.625 qpair failed and we were unable to recover it. 00:27:50.625 [2024-07-15 09:36:01.596366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.625 [2024-07-15 09:36:01.596392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.625 qpair failed and we were unable to recover it. 
00:27:50.625 [2024-07-15 09:36:01.596482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.625 [2024-07-15 09:36:01.596521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.625 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 09:36:01.596 through 09:36:01.621 (pipeline time 00:27:50.625-00:27:50.629), cycling through tqpair values 0x7f6a58000b90, 0x7f6a50000b90, 0x7f6a60000b90, and 0x1d69200, always with addr=10.0.0.2, port=4420 ...]
00:27:50.629 [2024-07-15 09:36:01.621238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.621270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.621363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.621388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.621473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.621497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.621586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.621611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.621702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.621727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.621810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.621844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.621934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.621959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.622040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.622063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.622152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.622177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.622252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.622277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 
00:27:50.629 [2024-07-15 09:36:01.622359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.622387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.622461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.622487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.622566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.622592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.622678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.622704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.622793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.622825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.622907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.622931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.623009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.623035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.623127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.623154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.623253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.623279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.623364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.623390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 
00:27:50.629 [2024-07-15 09:36:01.623479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.623508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.623597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.623622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.623704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.623728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.623816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.629 [2024-07-15 09:36:01.623847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.629 qpair failed and we were unable to recover it. 00:27:50.629 [2024-07-15 09:36:01.623937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.623964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.624051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.624077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.624165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.624190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.624268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.624299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.624383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.624411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.624515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.624554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 
00:27:50.630 [2024-07-15 09:36:01.624656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.624685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.624767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.624812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.624898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.624925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.625007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.625032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.625149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.625175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.625263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.625288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.625383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.625409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.625497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.625524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.625612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.625640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.625717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.625743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 
00:27:50.630 [2024-07-15 09:36:01.625836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.625863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.625950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.625974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.626061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.626088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.626196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.626221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.626308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.626332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.626410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.626434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.626544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.626570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.626652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.626679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.626756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.626780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.626879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.626908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 
00:27:50.630 [2024-07-15 09:36:01.626995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.627022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.627108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.627135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.627217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.627243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.627337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.627365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.627469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.627509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.627602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.627630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.627707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.627732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.627837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.627863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.627941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.627965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.628051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.628081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 
00:27:50.630 [2024-07-15 09:36:01.628167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.628195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.628270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.628294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.628382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.628408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.628484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.628510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.628604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.628634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.628735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.628762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.628865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.628894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.628983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.629013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.629094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.629122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.629207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.629234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 
00:27:50.630 [2024-07-15 09:36:01.629322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.629347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.629459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.629487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.629568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.629593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.629670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.629696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.629782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.629824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.629905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.629931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.630009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.630035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.630155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.630181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.630258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.630284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.630364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.630391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 
00:27:50.630 [2024-07-15 09:36:01.630467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.630494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.630582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.630608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.630705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.630734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.630843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.630871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.630952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.630978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.631062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.630 [2024-07-15 09:36:01.631088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.630 qpair failed and we were unable to recover it. 00:27:50.630 [2024-07-15 09:36:01.631177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.631203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.631289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.631315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.631394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.631420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.631506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.631532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 
00:27:50.631 [2024-07-15 09:36:01.631610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.631636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.631717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.631744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.631841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.631869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.631972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.632011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.632120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.632148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.632256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.632282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.632363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.632388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.632503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.632531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.632615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.632640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.632731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.632758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 
00:27:50.631 [2024-07-15 09:36:01.632849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.632877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.632969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.632994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.633079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.633110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.633202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.633228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.633306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.633331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.633420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.633448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.633530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.633556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.633648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.633676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.633771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.633799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.633890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.633915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 
00:27:50.631 [2024-07-15 09:36:01.633996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.634021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.634107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.634132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.634210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.634236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.634319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.634344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.634427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.634453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.634529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.634554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.634639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.634667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.634753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.634779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.634889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.634915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.634994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.635021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 
00:27:50.631 [2024-07-15 09:36:01.635116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.635142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.635230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.635256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.635333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.635359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.635444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.635470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.635555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.635583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.635661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.635685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.635795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.635826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.635914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.635940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.636031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.636058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 00:27:50.631 [2024-07-15 09:36:01.636150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.631 [2024-07-15 09:36:01.636177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.631 qpair failed and we were unable to recover it. 
00:27:50.631 [2024-07-15 09:36:01.636258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.636284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.636362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.636388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.636471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.636496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.636571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.636596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.636686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.636718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.636812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.636838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.636911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.636935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.637020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.637044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.637160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.637187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.637271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.637298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.637383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.637412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.637511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.637549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.637634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.637661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.637747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.637774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.631 [2024-07-15 09:36:01.637885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.631 [2024-07-15 09:36:01.637913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.631 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.637994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.638020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.638100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.638127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.638216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.638242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.638336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.638364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.638452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.638478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.638558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.638585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.638665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.638690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.638773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.638799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.638903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.638927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.639015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.639039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.639127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.639151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.639234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.639260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.639338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.639363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.639453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.639478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.639584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.639611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.639696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.639728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.639824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.639853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.639947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.639974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.640061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.640087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.640175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.640201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.640288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.640315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.640434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.640461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.640551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.640576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.640663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.640690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.640789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.640824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.640916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.640941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.641018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.641042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.641134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.641159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.641238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.641264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.641347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.641374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.641465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.641491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.641574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.641600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.641682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.641708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.641790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.641822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.641901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.641927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.642012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.642038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.642123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.642148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.642237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.642262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.642351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.642378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.642468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.642496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.642599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.642626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.642716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.642741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.642831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.642857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.642947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.642971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.643053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.643078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.643169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.643195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.643272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.643297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.643380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.643405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.643488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.643514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.643617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.643645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.643731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.643756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.643864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.643893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.643980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.644005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.644090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.644122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.644208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.644234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.644315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.644340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.644420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.644449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.644527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.644553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.644631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.644657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.644748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.644777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.644889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.644917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.645000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.645026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.645144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.645171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.645252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.632 [2024-07-15 09:36:01.645276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.632 qpair failed and we were unable to recover it.
00:27:50.632 [2024-07-15 09:36:01.645356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.645381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.645471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.645498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.645578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.645603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.645707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.645733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.645829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.645854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.645941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.645965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.646046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.646071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.646160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.646187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.646274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.646301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.646384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.646410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.646498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.646524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.646603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.646628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.646727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.646766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.646868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.646895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.646978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.647004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.647078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.647106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.647203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.647227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.647316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.647344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.647431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.647458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.647556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.647594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.647687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.647713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.647813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.647840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.647917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.647942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.648022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.648047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.648140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.648163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.648252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.648275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.648358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.648384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.648460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.648484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.648560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.648584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.648687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.648713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.648805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.648831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.648921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.648945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.649027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.649052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.649157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.649182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.649259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.649284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.649356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.649381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.649462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.649486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.649575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.649602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.649682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.649708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.649811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.649836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.649923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.649948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.650028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.650054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.650137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.650162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.650238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.650263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.650340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.650365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.650443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.650468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.650587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.650613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.650699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.650725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.650832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.650858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.650938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.650963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.651047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.651073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.651161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.651186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.651262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.651287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.651370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.651397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.651487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.651513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.651601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.651627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.651736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.651766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.651862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.651887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.651961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.651986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.652066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.652094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.652189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.652214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.652303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.652328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.652442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.652467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.652549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.633 [2024-07-15 09:36:01.652574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.633 qpair failed and we were unable to recover it.
00:27:50.633 [2024-07-15 09:36:01.652660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.652685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.652766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.652807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.652896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.652922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.652996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.653021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.653108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.653134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.653215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.653241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.653324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.653350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.653430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.653455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.653536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.653561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.653647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.653672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.653751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.653776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.653879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.653907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.653985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.654010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.654093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.654120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.654202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.654226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.654304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.654330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.654411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.654436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.654520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.654546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.654623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.654649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.654731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.654758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.654844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.654871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.654947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.654972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.655072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.655096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.655171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.655195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.655276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.655303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.655390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.655415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.655496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.655522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.655611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.655637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.655732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.655757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.655841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.655867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.655951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.655976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.656056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.656080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.656156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.656180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.656266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.656291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.656376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.656401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.656482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.656512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.656595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.656620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.656731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.656768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.656884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.656922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.657011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.657036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.657128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.657154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.657241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.657266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.657354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.657378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.657491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.657516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.657595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.657620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.657712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.657739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.657834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.657859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.657942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.657967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.658040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.658064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.658155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.658179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.658260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.634 [2024-07-15 09:36:01.658285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.634 qpair failed and we were unable to recover it.
00:27:50.634 [2024-07-15 09:36:01.658362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.658388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.658472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.658497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.658594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.658630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.658723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.658749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.658839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.658868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.658952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.658982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.659071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.659096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.659181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.659207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.659289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.659314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.659401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.659426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.659509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.659533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.659620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.659649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.659732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.659758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.659854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.659885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.659972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.659996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.660074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.635 [2024-07-15 09:36:01.660103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.635 qpair failed and we were unable to recover it.
00:27:50.635 [2024-07-15 09:36:01.660181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.660206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.660285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.660310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.660382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.660407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.660488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.660515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.660605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.660631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.660716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.660741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.660822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.660847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.660931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.660958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.661041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.661073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.661160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.661187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 
00:27:50.635 [2024-07-15 09:36:01.661295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.661321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.661402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.661430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.661529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.661556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.661639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.661667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.661748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.661774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.661866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.661893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.661976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.662002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.662099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.662126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.662202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.662226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.662308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.662333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 
00:27:50.635 [2024-07-15 09:36:01.662413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.635 [2024-07-15 09:36:01.662439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.635 qpair failed and we were unable to recover it. 00:27:50.635 [2024-07-15 09:36:01.662522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.662547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.662667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.662693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.662773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.662816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.662905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.662930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.663013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.663036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.663152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.663178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.663264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.663289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.663368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.663394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.663480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.663508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 
00:27:50.636 [2024-07-15 09:36:01.663591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.663619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.663707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.663734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.663828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.663854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.663935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.663961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.664068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.664094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.664168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.664196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.664280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.664312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.664413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.664438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.664529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.664554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.664631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.664655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 
00:27:50.636 [2024-07-15 09:36:01.664735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.664762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.664864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.664890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.664980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.665006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.665077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.665102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.665177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.665202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.665279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.665303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.665387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.665411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.665485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.665510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.665587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.665614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.665703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.665728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 
00:27:50.636 [2024-07-15 09:36:01.665830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.665867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.665950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.665977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.666058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.666091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.666202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.666227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.666312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.666338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.666414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.666440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.666516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.636 [2024-07-15 09:36:01.666543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.636 qpair failed and we were unable to recover it. 00:27:50.636 [2024-07-15 09:36:01.666626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.666652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.666730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.666755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.666855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.666880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 
00:27:50.637 [2024-07-15 09:36:01.666970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.666997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.667079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.667104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.667186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.667211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.667287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.667312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.667393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.667420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.667500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.667524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.667597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.667622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.667708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.667733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.667831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.667859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.667959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.667998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 
00:27:50.637 [2024-07-15 09:36:01.668082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.668108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.668186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.668211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.668288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.668314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.668398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.668422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.668501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.668524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.668606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.668636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.668718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.668746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.668843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.668870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.668952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.668981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.669062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.669088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 
00:27:50.637 [2024-07-15 09:36:01.669183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.669208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.669304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.669333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.669409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.669437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.669521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.669549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.669635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.669662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.669743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.669769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.669863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.669889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.669967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.669992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.670077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.670102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.670185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.670210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 
00:27:50.637 [2024-07-15 09:36:01.670291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.670317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.670407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.670433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.670514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.670539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.670616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.670641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.670713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.670738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.670830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.637 [2024-07-15 09:36:01.670856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.637 qpair failed and we were unable to recover it. 00:27:50.637 [2024-07-15 09:36:01.670935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.670961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.671053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.671079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.671157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.671184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.671265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.671291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 
00:27:50.638 [2024-07-15 09:36:01.671367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.671394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.671479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.671516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.671601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.671629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.671719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.671745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.671854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.671881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.671955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.671979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.672053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.672078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.672168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.672192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.672278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.672303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.672384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.672409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 
00:27:50.638 [2024-07-15 09:36:01.672496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.672533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.672624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.672654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.672745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.672774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.672864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.672891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.672983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.673010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.673093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.673118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.673208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.673235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.673326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.673351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.673448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.673474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.673558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.673581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 
00:27:50.638 [2024-07-15 09:36:01.673694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.673724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.673824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.673850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.673933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.673960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.674048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.674075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.674171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.674197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.674280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.674305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.674388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.674415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.674497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.674522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.674604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.674630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.674716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.674741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 
00:27:50.638 [2024-07-15 09:36:01.674826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.674851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.638 qpair failed and we were unable to recover it. 00:27:50.638 [2024-07-15 09:36:01.674932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.638 [2024-07-15 09:36:01.674956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.639 qpair failed and we were unable to recover it. 00:27:50.639 [2024-07-15 09:36:01.675032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.639 [2024-07-15 09:36:01.675056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.639 qpair failed and we were unable to recover it. 00:27:50.639 [2024-07-15 09:36:01.675136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.639 [2024-07-15 09:36:01.675159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.639 qpair failed and we were unable to recover it. 00:27:50.639 [2024-07-15 09:36:01.675237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.639 [2024-07-15 09:36:01.675261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.639 qpair failed and we were unable to recover it. 00:27:50.639 [2024-07-15 09:36:01.675334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.639 [2024-07-15 09:36:01.675358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.639 qpair failed and we were unable to recover it. 00:27:50.639 [2024-07-15 09:36:01.675441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.639 [2024-07-15 09:36:01.675465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.639 qpair failed and we were unable to recover it. 00:27:50.639 [2024-07-15 09:36:01.675539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.639 [2024-07-15 09:36:01.675563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.639 qpair failed and we were unable to recover it. 00:27:50.639 [2024-07-15 09:36:01.675640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.639 [2024-07-15 09:36:01.675664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.639 qpair failed and we were unable to recover it. 00:27:50.639 [2024-07-15 09:36:01.675754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.639 [2024-07-15 09:36:01.675781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.639 qpair failed and we were unable to recover it. 
00:27:50.644 [2024-07-15 09:36:01.698513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.698542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 00:27:50.644 [2024-07-15 09:36:01.698636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.698663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 00:27:50.644 [2024-07-15 09:36:01.698753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.698781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 00:27:50.644 [2024-07-15 09:36:01.698884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.698911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 00:27:50.644 [2024-07-15 09:36:01.698997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.699021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 00:27:50.644 [2024-07-15 09:36:01.699109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.699139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 00:27:50.644 [2024-07-15 09:36:01.699217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.699242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 00:27:50.644 [2024-07-15 09:36:01.699317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.699342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 00:27:50.644 [2024-07-15 09:36:01.699420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.699446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 00:27:50.644 [2024-07-15 09:36:01.699526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.699552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 
00:27:50.644 [2024-07-15 09:36:01.699631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.699655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 00:27:50.644 [2024-07-15 09:36:01.699737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.699764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 00:27:50.644 [2024-07-15 09:36:01.699863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.699891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 00:27:50.644 [2024-07-15 09:36:01.699966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.644 [2024-07-15 09:36:01.699992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.644 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.700077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.700103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.700196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.700221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.700305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.700334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.700431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.700458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.700545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.700569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.700649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.700673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 
00:27:50.645 [2024-07-15 09:36:01.700748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.700772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.700853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.700878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.700960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.700984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.701062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.701087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.701171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.701196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.701274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.701299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.701384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.701411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.701489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.701517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.701606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.701634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.701712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.701739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 
00:27:50.645 [2024-07-15 09:36:01.701835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.701871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.701955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.701981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.702062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.702089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.702178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.702203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.702285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.702310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.702385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.702409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.702486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.702509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.702592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.702617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.702703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.702728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.702812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.702839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 
00:27:50.645 [2024-07-15 09:36:01.702919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.702946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.703026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.703052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.703136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.703162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.703248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.703273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.703363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.703391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.703479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.703504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.703579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.703604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.703686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.703711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.703785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.703825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 00:27:50.645 [2024-07-15 09:36:01.703910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.645 [2024-07-15 09:36:01.703936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.645 qpair failed and we were unable to recover it. 
00:27:50.645 [2024-07-15 09:36:01.704021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.704047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.704130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.704156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.704258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.704283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.704364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.704389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.704466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.704492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.704582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.704611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.704690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.704717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.704821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.704853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.704940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.704966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.705061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.705087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 
00:27:50.646 [2024-07-15 09:36:01.705178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.705203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.705278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.705302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.705392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.705418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.705504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.705531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.705610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.705637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.705720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.705746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.705846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.705872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.705957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.705982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.706073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.706099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 00:27:50.646 [2024-07-15 09:36:01.706178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.646 [2024-07-15 09:36:01.706204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.646 qpair failed and we were unable to recover it. 
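The repeated errno = 111 above is ECONNREFUSED: while the target side of the disconnect test is down, nothing is listening on 10.0.0.2:4420, so every connect() issued by the host's POSIX socket layer is rejected immediately and each qpair reports the failure. A minimal, self-contained C sketch of that failure mode follows; it is illustrative only, not SPDK's posix_sock_create(), and only the address and port are taken from the log.

/*
 * Sketch: a TCP connect() to an address/port with no listener fails
 * with errno 111 (ECONNREFUSED) on Linux, the error logged above.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With nothing accepting on 10.0.0.2:4420 this prints
         * "connect() failed, errno = 111 (Connection refused)". */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}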
00:27:50.646 [2024-07-15 09:36:01.706287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.706313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 [2024-07-15 09:36:01.706401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.706430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 [2024-07-15 09:36:01.706520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.706547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 [2024-07-15 09:36:01.706638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.706666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 [2024-07-15 09:36:01.706756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.706782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 [2024-07-15 09:36:01.706888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.706914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 [2024-07-15 09:36:01.706992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.707017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 [2024-07-15 09:36:01.707097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.707124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:50.646 [2024-07-15 09:36:01.707200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.707226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:27:50.646 [2024-07-15 09:36:01.707335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.707361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:50.646 [2024-07-15 09:36:01.707473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.707499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:50.646 [2024-07-15 09:36:01.707582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.707607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.646 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 [2024-07-15 09:36:01.707681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.707709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 [2024-07-15 09:36:01.707792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.707825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 [2024-07-15 09:36:01.707922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.707961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 [2024-07-15 09:36:01.708052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.708080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.646 [2024-07-15 09:36:01.708170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.708197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
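The xtrace lines spliced into the error stream above (autotest_common.sh@858 checking (( i == 0 )), then return 0, then timing_exit start_nvmf_tgt) show the harness leaving a countdown wait loop and declaring target startup finished while the host side keeps retrying its qpairs. The shell helper itself is not shown in this excerpt, so the following is only a hypothetical C sketch of the bounded retry-until-listening pattern such a loop implements; the attempt count is illustrative.

/*
 * Hypothetical sketch (not autotest_common.sh's implementation): retry
 * connect() until something accepts on the port or attempts run out,
 * mirroring the (( i == 0 )) countdown visible in the trace.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int wait_for_listener(const char *ip, unsigned short port, int attempts)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &sa.sin_addr);

    for (int i = attempts; i > 0; i--) {    /* count down like the shell loop */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            close(fd);
            return 0;                        /* listener is up */
        }
        close(fd);
        if (errno != ECONNREFUSED)           /* retry only "not listening yet" */
            return -1;
        sleep(1);
    }
    return -1;                               /* attempts exhausted: i reached 0 */
}

int main(void)
{
    if (wait_for_listener("10.0.0.2", 4420, 10) != 0)
        fprintf(stderr, "target never came up: %s\n", strerror(errno));
    return 0;
}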
00:27:50.646 [2024-07-15 09:36:01.708291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.646 [2024-07-15 09:36:01.708319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420
00:27:50.646 qpair failed and we were unable to recover it.
00:27:50.647 [2024-07-15 09:36:01.709318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.647 [2024-07-15 09:36:01.709346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.647 qpair failed and we were unable to recover it.
00:27:50.647 [2024-07-15 09:36:01.709435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.647 [2024-07-15 09:36:01.709465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420
00:27:50.647 qpair failed and we were unable to recover it.
00:27:50.647 [2024-07-15 09:36:01.709902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.647 [2024-07-15 09:36:01.709931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:27:50.647 qpair failed and we were unable to recover it.
00:27:50.648 [2024-07-15 09:36:01.717399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.648 [2024-07-15 09:36:01.717426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:27:50.648 qpair failed and we were unable to recover it.
00:27:50.648 [2024-07-15 09:36:01.717511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.648 [2024-07-15 09:36:01.717544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.648 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.717637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.717662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.717744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.717769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.717864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.717889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.717973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.717998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.718081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.718108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.718192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.718218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.718315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.718341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.718416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.718440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.718529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.718555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 
00:27:50.649 [2024-07-15 09:36:01.718646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.718670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.718745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.718770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.718867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.718893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.718969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.718994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.719081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.719115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.719201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.719226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.719302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.719327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.719401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.719426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.719539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.719565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.719649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.719675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 
00:27:50.649 [2024-07-15 09:36:01.719759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.719788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.719874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.719900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.719983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.720010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.720092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.720117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.720197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.720221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.720308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.720336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.720419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.720447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.720539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.720568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.720663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.720691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.720772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.720798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 
00:27:50.649 [2024-07-15 09:36:01.720879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.720904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.720976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.721000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.721084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.721117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.721201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.721227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d69200 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.721316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.721353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.721439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.721466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.721555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.721581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.721656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.721681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.721756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.721781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 A controller has encountered a failure and is being reset. 00:27:50.649 [2024-07-15 09:36:01.721884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.721911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 
00:27:50.649 [2024-07-15 09:36:01.721997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.722024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.649 [2024-07-15 09:36:01.722109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.649 [2024-07-15 09:36:01.722135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.649 qpair failed and we were unable to recover it. 00:27:50.650 [2024-07-15 09:36:01.722217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.650 [2024-07-15 09:36:01.722241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.650 qpair failed and we were unable to recover it. 00:27:50.650 [2024-07-15 09:36:01.722347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.650 [2024-07-15 09:36:01.722372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.650 qpair failed and we were unable to recover it. 00:27:50.650 [2024-07-15 09:36:01.722449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.650 [2024-07-15 09:36:01.722474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.650 qpair failed and we were unable to recover it. 00:27:50.650 [2024-07-15 09:36:01.722552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.650 [2024-07-15 09:36:01.722579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.650 qpair failed and we were unable to recover it. 00:27:50.650 [2024-07-15 09:36:01.722663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.650 [2024-07-15 09:36:01.722691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a50000b90 with addr=10.0.0.2, port=4420 00:27:50.650 qpair failed and we were unable to recover it. 00:27:50.650 [2024-07-15 09:36:01.722813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.650 [2024-07-15 09:36:01.722849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d770e0 with addr=10.0.0.2, port=4420 00:27:50.650 [2024-07-15 09:36:01.722867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d770e0 is same with the state(5) to be set 00:27:50.650 [2024-07-15 09:36:01.722892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d770e0 (9): Bad file descriptor 00:27:50.650 [2024-07-15 09:36:01.722910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.650 [2024-07-15 09:36:01.722927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.650 [2024-07-15 09:36:01.722943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.650 Unable to reset the controller. 
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:50.650 Malloc0
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:50.650 [2024-07-15 09:36:01.750472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:50.650 [2024-07-15 09:36:01.778700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
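The rpc_cmd wrapper traced above forwards its arguments to SPDK's scripts/rpc.py, so the target-side setup can be reproduced directly from a shell (a sketch using only the RPCs visible in the trace; it assumes nvmf_tgt is already running on the default /var/tmp/spdk.sock):

    RPC=./scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB malloc bdev, 512-byte blocks
    $RPC nvmf_create_transport -t tcp -o                                   # TCP transport with default options
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # expose the bdev as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # discovery service on the same port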
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:50.650 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:50.907 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:50.907 09:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 943486
00:27:51.838 Controller properly reset.
00:27:57.095 Initializing NVMe Controllers
00:27:57.095 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:57.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:57.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:57.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:57.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:57.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:57.095 Initialization complete. Launching workers.
00:27:57.095 Starting thread on core 1
00:27:57.095 Starting thread on core 2
00:27:57.095 Starting thread on core 3
00:27:57.095 Starting thread on core 0
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:27:57.095
00:27:57.095 real 0m10.781s
00:27:57.095 user 0m33.023s
00:27:57.095 sys 0m7.562s
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:57.095 ************************************
00:27:57.095 END TEST nvmf_target_disconnect_tc2
00:27:57.095 ************************************
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:57.095 rmmod nvme_tcp
00:27:57.095 rmmod nvme_fabrics
00:27:57.095 rmmod nvme_keyring
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 943999 ']'
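The nvmftestfini teardown traced above amounts to a best-effort unload of the kernel NVMe/TCP initiator stack; the standalone equivalent (a sketch of what the modprobe/rmmod lines show) is:

    set +e                      # removal can fail while a controller is still live, hence the retry loop in the trace
    modprobe -v -r nvme-tcp     # the rmmod lines above are this command's verbose output, including dependent modules
    modprobe -v -r nvme-fabrics
    set -e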
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 943999
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 943999 ']'
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 943999
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 943999
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 943999'
00:27:57.095 killing process with pid 943999
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 943999
00:27:57.095 09:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 943999
00:27:57.095 09:36:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:57.095 09:36:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:57.095 09:36:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:57.095 09:36:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:57.095 09:36:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:57.095 09:36:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:57.095 09:36:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:57.095 09:36:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:59.008 09:36:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:59.008
00:27:59.008 real 0m15.418s
00:27:59.008 user 0m58.650s
00:27:59.008 sys 0m9.870s
00:27:59.008 09:36:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:59.008 09:36:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:59.008 ************************************
00:27:59.008 END TEST nvmf_target_disconnect
00:27:59.008 ************************************
00:27:59.008 09:36:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:27:59.008 09:36:10 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:27:59.008 09:36:10 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:59.008 09:36:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:59.008 09:36:10 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:27:59.008
00:27:59.008 real 19m14.286s
00:27:59.008 user 45m43.113s
00:27:59.008 sys 4m46.324s
00:27:59.008 09:36:10 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:59.008 09:36:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:59.008 ************************************
00:27:59.008 END TEST nvmf_tcp
00:27:59.008 ************************************
00:27:59.009 09:36:10 -- common/autotest_common.sh@1142 -- # return 0
00:27:59.009 09:36:10 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
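The spdkcli suite that starts below exercises the same NVMe-oF target object model through spdkcli's configuration tree rather than raw RPCs. For orientation, a minimal manual session mirroring the first few job commands (a sketch; one command per spdkcli.py invocation, in the same style as the 'll /nvmf' call later in this log, with a running nvmf_tgt assumed):

    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    ./scripts/spdkcli.py ll /nvmf    # inspect the resulting tree, as the check_match step does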
00:27:59.009 09:36:10 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:27:59.009 09:36:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:27:59.009 09:36:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:59.009 09:36:10 -- common/autotest_common.sh@10 -- # set +x
00:27:59.009 ************************************
00:27:59.009 START TEST spdkcli_nvmf_tcp
00:27:59.009 ************************************
00:27:59.009 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:27:59.267 * Looking for test storage...
00:27:59.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:59.267 09:36:10 spdkcli_nvmf_tcp --
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=945745 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@34 -- # waitforlisten 945745 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 945745 ']' 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:59.267 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.267 [2024-07-15 09:36:10.304261] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:27:59.267 [2024-07-15 09:36:10.304335] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid945745 ] 00:27:59.267 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.267 [2024-07-15 09:36:10.361394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:59.525 [2024-07-15 09:36:10.469001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.525 [2024-07-15 09:36:10.469005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.525 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:59.525 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:27:59.525 09:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:59.525 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:59.526 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.526 09:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:59.526 09:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:59.526 09:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:59.526 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:59.526 09:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.526 09:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:59.526 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:59.526 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:59.526 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:59.526 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:59.526 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:59.526 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:59.526 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create 
Malloc4 2'\'' '\''Malloc4'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:59.526 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:59.526 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:59.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:59.526 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:59.526 ' 00:28:02.049 [2024-07-15 09:36:13.239512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.419 [2024-07-15 09:36:14.459634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:05.946 [2024-07-15 09:36:16.718554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:28:07.843 [2024-07-15 09:36:18.652527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:28:09.216 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:09.216 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:09.216 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:09.216 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:09.216 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:09.216 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:09.216 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 
io_unit_size=8192', '', True] 00:28:09.216 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:09.216 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:09.216 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:09.216 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:09.216 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:09.216 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:09.216 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:09.216 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:09.216 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:09.216 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:09.216 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:09.216 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:09.216 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:09.216 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:09.216 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:09.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:09.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:28:09.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:09.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:09.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:09.217 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:09.217 09:36:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:09.217 09:36:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:09.217 09:36:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:09.217 09:36:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:09.217 09:36:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:09.217 09:36:20 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:28:09.217 09:36:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:28:09.217 09:36:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:09.782 09:36:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:09.783 09:36:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:09.783 09:36:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:09.783 09:36:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:09.783 09:36:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:09.783 09:36:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:09.783 09:36:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:09.783 09:36:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:09.783 09:36:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:09.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:09.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:09.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:09.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:09.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:09.783 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:09.783 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:09.783 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:09.783 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:09.783 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:09.783 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:09.783 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:09.783 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:09.783 ' 00:28:15.054 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:15.054 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:15.054 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:15.054 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:15.054 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:15.054 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:15.054 Executing command: ['/nvmf/subsystem delete 
nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:15.054 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:15.054 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:15.054 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:15.054 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:15.054 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:15.054 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:15.054 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 945745 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 945745 ']' 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 945745 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 945745 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 945745' 00:28:15.054 killing process with pid 945745 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 945745 00:28:15.054 09:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 945745 00:28:15.054 09:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:15.054 09:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:15.054 09:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 945745 ']' 00:28:15.054 09:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 945745 00:28:15.054 09:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 945745 ']' 00:28:15.054 09:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 945745 00:28:15.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (945745) - No such process 00:28:15.054 09:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 945745 is not found' 00:28:15.054 Process with pid 945745 is not found 00:28:15.054 09:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:15.054 09:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:15.054 09:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:15.054 00:28:15.054 real 0m16.060s 00:28:15.054 user 0m33.846s 00:28:15.054 sys 0m0.871s 00:28:15.054 09:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:15.054 09:36:26 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:28:15.054 ************************************ 00:28:15.054 END TEST spdkcli_nvmf_tcp 00:28:15.054 ************************************ 00:28:15.312 09:36:26 -- common/autotest_common.sh@1142 -- # return 0 00:28:15.312 09:36:26 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:15.312 09:36:26 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:15.312 09:36:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:15.312 09:36:26 -- common/autotest_common.sh@10 -- # set +x 00:28:15.312 ************************************ 00:28:15.312 START TEST nvmf_identify_passthru 00:28:15.312 ************************************ 00:28:15.312 09:36:26 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:15.312 * Looking for test storage... 00:28:15.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:15.313 09:36:26 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.313 09:36:26 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.313 09:36:26 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.313 09:36:26 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.313 09:36:26 nvmf_identify_passthru -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.313 09:36:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.313 09:36:26 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.313 09:36:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:15.313 09:36:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:15.313 09:36:26 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.313 09:36:26 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.313 09:36:26 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.313 09:36:26 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.313 09:36:26 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.313 09:36:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.313 09:36:26 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.313 09:36:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:15.313 09:36:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.313 09:36:26 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.313 09:36:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:15.313 09:36:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:15.313 09:36:26 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:28:15.313 09:36:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.209 09:36:28 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:17.209 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:17.209 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.209 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:17.210 Found net devices under 0000:09:00.0: cvl_0_0 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:17.210 Found net devices under 0000:09:00.1: cvl_0_1 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
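[annotation] The nvmf_tcp_init trace that follows moves one E810 port into a private network namespace so target and initiator can talk over real hardware on a single host. A condensed sketch of that setup, using the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses from the trace; this consolidated form is an illustration, not an excerpt of nvmf/common.sh:

    # Target port lives in its own namespace; the initiator port stays
    # in the default namespace. Names and addresses mirror the trace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # reachability check

The single-packet pings in both directions, whose output appears below, are the gate for "return 0" from nvmftestinit.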
00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.210 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.467 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.467 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.467 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:17.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:28:17.468 00:28:17.468 --- 10.0.0.2 ping statistics --- 00:28:17.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.468 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:17.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:28:17.468 00:28:17.468 --- 10.0.0.1 ping statistics --- 00:28:17.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.468 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:17.468 09:36:28 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:17.468 09:36:28 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:17.468 09:36:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:28:17.468 09:36:28 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:0b:00.0 00:28:17.468 09:36:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:28:17.468 09:36:28 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:28:17.468 09:36:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:28:17.468 09:36:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:17.468 09:36:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:17.468 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.659 
09:36:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:28:21.659 09:36:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:28:21.659 09:36:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:21.659 09:36:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:21.659 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.886 09:36:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:25.886 09:36:36 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:25.886 09:36:36 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:25.886 09:36:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:25.886 09:36:36 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:25.886 09:36:36 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:25.886 09:36:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:25.887 09:36:36 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=950248 00:28:25.887 09:36:36 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:25.887 09:36:36 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:25.887 09:36:36 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 950248 00:28:25.887 09:36:36 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 950248 ']' 00:28:25.887 09:36:36 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.887 09:36:36 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:25.887 09:36:36 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.887 09:36:36 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:25.887 09:36:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:25.887 [2024-07-15 09:36:36.950730] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:28:25.887 [2024-07-15 09:36:36.950864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.887 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.887 [2024-07-15 09:36:37.013557] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:26.145 [2024-07-15 09:36:37.125188] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.145 [2024-07-15 09:36:37.125234] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
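[annotation] Because nvmf_tgt was launched with --wait-for-rpc, the application pauses before subsystem initialization so pre-init options can be applied over the RPC socket; the rpc_cmd calls traced below enable identify passthrough before releasing the target. A minimal sketch of that sequence as direct scripts/rpc.py invocations (assuming rpc_cmd is a thin wrapper around rpc.py, and that -o/-u keep their current rpc.py meanings of C2H-success toggle and IO unit size; exact semantics depend on the SPDK revision):

    # Must be sent while the target is still paused by --wait-for-rpc.
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    # Releases the target and runs the deferred subsystem initialization.
    scripts/rpc.py framework_start_init
    # TCP transport with the traced -o -u 8192 options.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192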
00:28:26.145 [2024-07-15 09:36:37.125262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.145 [2024-07-15 09:36:37.125274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.145 [2024-07-15 09:36:37.125283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.145 [2024-07-15 09:36:37.125368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.145 [2024-07-15 09:36:37.125487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.145 [2024-07-15 09:36:37.125568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.145 [2024-07-15 09:36:37.125573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:28:26.146 09:36:37 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.146 INFO: Log level set to 20 00:28:26.146 INFO: Requests: 00:28:26.146 { 00:28:26.146 "jsonrpc": "2.0", 00:28:26.146 "method": "nvmf_set_config", 00:28:26.146 "id": 1, 00:28:26.146 "params": { 00:28:26.146 "admin_cmd_passthru": { 00:28:26.146 "identify_ctrlr": true 00:28:26.146 } 00:28:26.146 } 00:28:26.146 } 00:28:26.146 00:28:26.146 INFO: response: 00:28:26.146 { 00:28:26.146 "jsonrpc": "2.0", 00:28:26.146 "id": 1, 00:28:26.146 "result": true 00:28:26.146 } 00:28:26.146 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.146 09:36:37 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.146 INFO: Setting log level to 20 00:28:26.146 INFO: Setting log level to 20 00:28:26.146 INFO: Log level set to 20 00:28:26.146 INFO: Log level set to 20 00:28:26.146 INFO: Requests: 00:28:26.146 { 00:28:26.146 "jsonrpc": "2.0", 00:28:26.146 "method": "framework_start_init", 00:28:26.146 "id": 1 00:28:26.146 } 00:28:26.146 00:28:26.146 INFO: Requests: 00:28:26.146 { 00:28:26.146 "jsonrpc": "2.0", 00:28:26.146 "method": "framework_start_init", 00:28:26.146 "id": 1 00:28:26.146 } 00:28:26.146 00:28:26.146 [2024-07-15 09:36:37.276018] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:26.146 INFO: response: 00:28:26.146 { 00:28:26.146 "jsonrpc": "2.0", 00:28:26.146 "id": 1, 00:28:26.146 "result": true 00:28:26.146 } 00:28:26.146 00:28:26.146 INFO: response: 00:28:26.146 { 00:28:26.146 "jsonrpc": "2.0", 00:28:26.146 "id": 1, 00:28:26.146 "result": true 00:28:26.146 } 00:28:26.146 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.146 09:36:37 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.146 09:36:37 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:28:26.146 INFO: Setting log level to 40 00:28:26.146 INFO: Setting log level to 40 00:28:26.146 INFO: Setting log level to 40 00:28:26.146 [2024-07-15 09:36:37.286072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.146 09:36:37 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.146 09:36:37 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.146 09:36:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:29.432 Nvme0n1 00:28:29.432 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.432 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:29.432 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.432 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:29.432 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.432 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:29.432 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.432 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:29.433 [2024-07-15 09:36:40.177762] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:29.433 [ 00:28:29.433 { 00:28:29.433 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:29.433 "subtype": "Discovery", 00:28:29.433 "listen_addresses": [], 00:28:29.433 "allow_any_host": true, 00:28:29.433 "hosts": [] 00:28:29.433 }, 00:28:29.433 { 00:28:29.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:29.433 "subtype": "NVMe", 00:28:29.433 "listen_addresses": [ 00:28:29.433 { 00:28:29.433 "trtype": "TCP", 00:28:29.433 "adrfam": "IPv4", 00:28:29.433 "traddr": "10.0.0.2", 00:28:29.433 "trsvcid": "4420" 00:28:29.433 } 00:28:29.433 ], 00:28:29.433 "allow_any_host": true, 00:28:29.433 "hosts": [], 00:28:29.433 "serial_number": 
"SPDK00000000000001", 00:28:29.433 "model_number": "SPDK bdev Controller", 00:28:29.433 "max_namespaces": 1, 00:28:29.433 "min_cntlid": 1, 00:28:29.433 "max_cntlid": 65519, 00:28:29.433 "namespaces": [ 00:28:29.433 { 00:28:29.433 "nsid": 1, 00:28:29.433 "bdev_name": "Nvme0n1", 00:28:29.433 "name": "Nvme0n1", 00:28:29.433 "nguid": "5E58BF51BE5D4B1481D5E24DEFCE8954", 00:28:29.433 "uuid": "5e58bf51-be5d-4b14-81d5-e24defce8954" 00:28:29.433 } 00:28:29.433 ] 00:28:29.433 } 00:28:29.433 ] 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:29.433 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:29.433 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:29.433 09:36:40 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:29.433 09:36:40 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:29.433 09:36:40 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:29.433 09:36:40 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:29.433 09:36:40 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:29.433 09:36:40 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:29.433 09:36:40 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:29.433 rmmod nvme_tcp 00:28:29.433 rmmod nvme_fabrics 00:28:29.433 rmmod nvme_keyring 00:28:29.433 09:36:40 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:29.433 09:36:40 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:29.433 09:36:40 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:29.433 09:36:40 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 950248 ']' 00:28:29.433 09:36:40 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 950248 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 950248 ']' 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 950248 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:29.433 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 950248 00:28:29.693 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:29.693 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:29.693 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 950248' 00:28:29.693 killing process with pid 950248 00:28:29.693 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 950248 00:28:29.693 09:36:40 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 950248 00:28:31.068 09:36:42 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:31.068 09:36:42 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:31.069 09:36:42 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:31.069 09:36:42 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:31.069 09:36:42 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:31.069 09:36:42 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.069 09:36:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:31.069 09:36:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.604 09:36:44 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:33.604 00:28:33.604 real 0m17.889s 00:28:33.604 user 0m26.383s 00:28:33.604 sys 0m2.303s 00:28:33.604 09:36:44 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:33.604 09:36:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:33.604 ************************************ 00:28:33.604 END TEST nvmf_identify_passthru 00:28:33.604 ************************************ 00:28:33.604 09:36:44 -- common/autotest_common.sh@1142 -- # return 0 00:28:33.604 09:36:44 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:33.604 09:36:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:33.604 09:36:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.604 09:36:44 -- common/autotest_common.sh@10 -- # set +x 00:28:33.604 ************************************ 00:28:33.604 START TEST nvmf_dif 00:28:33.604 ************************************ 00:28:33.604 09:36:44 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:33.604 * Looking for test storage... 
00:28:33.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:33.605 09:36:44 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.605 09:36:44 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.605 09:36:44 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.605 09:36:44 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.605 09:36:44 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.605 09:36:44 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.605 09:36:44 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.605 09:36:44 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:28:33.605 09:36:44 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:33.605 09:36:44 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:33.605 09:36:44 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:33.605 09:36:44 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:33.605 09:36:44 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:33.605 09:36:44 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.605 09:36:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:33.605 09:36:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:33.605 09:36:44 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:33.605 09:36:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:35.507 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:35.507 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:35.507 Found net devices under 0000:09:00.0: cvl_0_0 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:35.507 Found net devices under 0000:09:00.1: cvl_0_1 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.507 09:36:46 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:35.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:35.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:28:35.507 00:28:35.507 --- 10.0.0.2 ping statistics --- 00:28:35.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.507 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:35.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:28:35.507 00:28:35.507 --- 10.0.0.1 ping statistics --- 00:28:35.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.507 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:35.507 09:36:46 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:36.441 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:36.441 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:36.441 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:36.441 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:36.441 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:36.441 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:36.441 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:36.441 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:36.441 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:36.441 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:36.441 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:36.441 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:36.441 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:36.441 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:36.441 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:36.441 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:36.441 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:36.698 09:36:47 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.698 09:36:47 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:36.698 09:36:47 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:36.698 09:36:47 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.699 09:36:47 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:36.699 09:36:47 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:36.699 09:36:47 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:36.699 09:36:47 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:36.699 09:36:47 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:36.699 09:36:47 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:36.699 09:36:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:36.699 09:36:47 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=953511 00:28:36.699 09:36:47 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:36.699 09:36:47 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 953511 00:28:36.699 09:36:47 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 953511 ']' 00:28:36.699 09:36:47 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.699 09:36:47 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:36.699 09:36:47 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.699 09:36:47 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:36.699 09:36:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:36.699 [2024-07-15 09:36:47.781274] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:28:36.699 [2024-07-15 09:36:47.781370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.699 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.699 [2024-07-15 09:36:47.843631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.957 [2024-07-15 09:36:47.949184] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.957 [2024-07-15 09:36:47.949234] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.957 [2024-07-15 09:36:47.949254] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.957 [2024-07-15 09:36:47.949265] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.957 [2024-07-15 09:36:47.949275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
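For readers reconstructing the environment: the nvmf_tcp_init steps traced above loop the two cvl_0_* ports of the NIC under test back onto each other, moving the target-side port (cvl_0_0, 10.0.0.2) into its own network namespace so that initiator traffic from cvl_0_1 (10.0.0.1) crosses a real TCP path, and the target application is then launched inside that namespace. Condensed into a standalone sketch of the same commands the log just executed:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP traffic on the initiator port
ping -c 1 10.0.0.2                                              # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator sanity check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF   # -e 0xFFFF enables all tracepoint groups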
00:28:36.957 [2024-07-15 09:36:47.949300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.957 09:36:48 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:36.957 09:36:48 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:28:36.957 09:36:48 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:36.957 09:36:48 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:36.957 09:36:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:36.957 09:36:48 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.957 09:36:48 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:36.957 09:36:48 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:36.957 09:36:48 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.957 09:36:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:36.957 [2024-07-15 09:36:48.095304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.957 09:36:48 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.957 09:36:48 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:36.957 09:36:48 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:36.957 09:36:48 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:36.957 09:36:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:36.957 ************************************ 00:28:36.957 START TEST fio_dif_1_default 00:28:36.957 ************************************ 00:28:36.957 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:28:36.957 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:36.957 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:36.957 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:36.957 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:36.957 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:36.957 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:36.957 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.957 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:36.957 bdev_null0 00:28:36.957 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.957 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:36.957 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.957 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:36.958 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.958 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:36.958 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.958 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:36.958 09:36:48 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.958 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:36.958 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.958 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:37.216 [2024-07-15 09:36:48.155595] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:37.216 { 00:28:37.216 "params": { 00:28:37.216 "name": "Nvme$subsystem", 00:28:37.216 "trtype": "$TEST_TRANSPORT", 00:28:37.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.216 "adrfam": "ipv4", 00:28:37.216 "trsvcid": "$NVMF_PORT", 00:28:37.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.216 "hdgst": ${hdgst:-false}, 00:28:37.216 "ddgst": ${ddgst:-false} 00:28:37.216 }, 00:28:37.216 "method": "bdev_nvme_attach_controller" 00:28:37.216 } 00:28:37.216 EOF 00:28:37.216 )") 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:37.216 09:36:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:37.216 "params": { 00:28:37.216 "name": "Nvme0", 00:28:37.216 "trtype": "tcp", 00:28:37.216 "traddr": "10.0.0.2", 00:28:37.216 "adrfam": "ipv4", 00:28:37.217 "trsvcid": "4420", 00:28:37.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:37.217 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:37.217 "hdgst": false, 00:28:37.217 "ddgst": false 00:28:37.217 }, 00:28:37.217 "method": "bdev_nvme_attach_controller" 00:28:37.217 }' 00:28:37.217 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:37.217 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:37.217 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.217 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.217 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:37.217 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:37.217 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:37.217 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:37.217 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:37.217 09:36:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.475 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:37.475 fio-3.35 00:28:37.475 Starting 1 thread 00:28:37.475 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.697 00:28:49.697 filename0: (groupid=0, jobs=1): err= 0: pid=953740: Mon Jul 15 09:36:59 2024 00:28:49.697 read: IOPS=97, BW=391KiB/s (401kB/s)(3920KiB/10015msec) 00:28:49.697 slat (usec): min=4, max=111, avg= 9.61, stdev= 4.26 00:28:49.697 clat (usec): min=540, max=49231, avg=40844.56, stdev=2634.37 00:28:49.697 lat (usec): min=548, max=49272, avg=40854.17, stdev=2634.50 00:28:49.697 clat percentiles (usec): 00:28:49.697 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:49.697 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:49.697 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:49.697 | 99.00th=[41157], 99.50th=[41681], 99.90th=[49021], 99.95th=[49021], 00:28:49.697 | 99.99th=[49021] 00:28:49.697 bw ( KiB/s): min= 384, max= 448, per=99.64%, avg=390.40, stdev=16.74, samples=20 00:28:49.697 iops : min= 96, max= 112, avg=97.60, 
stdev= 4.19, samples=20 00:28:49.697 lat (usec) : 750=0.41% 00:28:49.697 lat (msec) : 50=99.59% 00:28:49.697 cpu : usr=90.08%, sys=9.65%, ctx=26, majf=0, minf=307 00:28:49.697 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.697 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.697 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:49.697 00:28:49.697 Run status group 0 (all jobs): 00:28:49.697 READ: bw=391KiB/s (401kB/s), 391KiB/s-391KiB/s (401kB/s-401kB/s), io=3920KiB (4014kB), run=10015-10015msec 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.697 00:28:49.697 real 0m11.282s 00:28:49.697 user 0m10.188s 00:28:49.697 sys 0m1.298s 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:49.697 09:36:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 ************************************ 00:28:49.698 END TEST fio_dif_1_default 00:28:49.698 ************************************ 00:28:49.698 09:36:59 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:49.698 09:36:59 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:49.698 09:36:59 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:49.698 09:36:59 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.698 09:36:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 ************************************ 00:28:49.698 START TEST fio_dif_1_multi_subsystems 00:28:49.698 ************************************ 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # 
for sub in "$@" 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 bdev_null0 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 [2024-07-15 09:36:59.478084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 bdev_null1 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 09:36:59 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:49.698 { 00:28:49.698 "params": { 00:28:49.698 "name": "Nvme$subsystem", 00:28:49.698 "trtype": "$TEST_TRANSPORT", 00:28:49.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.698 "adrfam": "ipv4", 00:28:49.698 "trsvcid": "$NVMF_PORT", 00:28:49.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.698 "hdgst": ${hdgst:-false}, 00:28:49.698 "ddgst": ${ddgst:-false} 00:28:49.698 }, 00:28:49.698 "method": "bdev_nvme_attach_controller" 00:28:49.698 } 00:28:49.698 EOF 00:28:49.698 )") 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:49.698 { 00:28:49.698 "params": { 00:28:49.698 "name": "Nvme$subsystem", 00:28:49.698 "trtype": "$TEST_TRANSPORT", 00:28:49.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.698 "adrfam": "ipv4", 00:28:49.698 "trsvcid": "$NVMF_PORT", 00:28:49.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.698 "hdgst": ${hdgst:-false}, 00:28:49.698 "ddgst": ${ddgst:-false} 00:28:49.698 }, 00:28:49.698 "method": "bdev_nvme_attach_controller" 00:28:49.698 } 00:28:49.698 EOF 00:28:49.698 )") 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
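The gen_nvmf_target_json trace just completed shows the pattern used to build the JSON that fio's bdev plugin reads from /dev/fd/62: one bdev_nvme_attach_controller stanza is captured from a here-document per subsystem id, the stanzas are comma-joined via IFS, and jq validates and pretty-prints the assembled document. A minimal sketch of the same pattern; the "subsystems"/"bdev" wrapper is a simplification of what the full helper in nvmf/common.sh emits:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # comma-join the stanzas inside the bdev subsystem's config array and let
    # jq validate and pretty-print the assembled document
    local IFS=,
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}

Called as gen_nvmf_target_json 0 1, this yields the two-controller document printed a few lines below.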
00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:49.698 "params": { 00:28:49.698 "name": "Nvme0", 00:28:49.698 "trtype": "tcp", 00:28:49.698 "traddr": "10.0.0.2", 00:28:49.698 "adrfam": "ipv4", 00:28:49.698 "trsvcid": "4420", 00:28:49.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:49.698 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:49.698 "hdgst": false, 00:28:49.698 "ddgst": false 00:28:49.698 }, 00:28:49.698 "method": "bdev_nvme_attach_controller" 00:28:49.698 },{ 00:28:49.698 "params": { 00:28:49.698 "name": "Nvme1", 00:28:49.698 "trtype": "tcp", 00:28:49.698 "traddr": "10.0.0.2", 00:28:49.698 "adrfam": "ipv4", 00:28:49.698 "trsvcid": "4420", 00:28:49.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:49.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:49.698 "hdgst": false, 00:28:49.698 "ddgst": false 00:28:49.698 }, 00:28:49.698 "method": "bdev_nvme_attach_controller" 00:28:49.698 }' 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:49.698 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:49.699 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:49.699 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:49.699 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:49.699 09:36:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:49.699 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:49.699 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:49.699 fio-3.35 00:28:49.699 Starting 2 threads 00:28:49.699 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.677 00:28:59.678 filename0: (groupid=0, jobs=1): err= 0: pid=955139: Mon Jul 15 09:37:10 2024 00:28:59.678 read: IOPS=351, BW=1405KiB/s (1438kB/s)(13.8MiB/10036msec) 00:28:59.678 slat (nsec): min=7580, max=66816, avg=9971.25, stdev=3127.05 00:28:59.678 clat (usec): min=508, max=42376, avg=11360.17, stdev=17882.11 00:28:59.678 lat (usec): min=517, max=42388, avg=11370.14, stdev=17882.01 00:28:59.678 clat percentiles (usec): 00:28:59.678 | 1.00th=[ 562], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 611], 00:28:59.678 | 30.00th=[ 635], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 709], 00:28:59.678 | 70.00th=[ 881], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:59.678 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:28:59.678 | 99.99th=[42206] 00:28:59.678 
bw ( KiB/s): min= 896, max= 2048, per=71.45%, avg=1408.00, stdev=368.09, samples=20 00:28:59.678 iops : min= 224, max= 512, avg=352.00, stdev=92.02, samples=20 00:28:59.678 lat (usec) : 750=64.16%, 1000=9.25% 00:28:59.678 lat (msec) : 2=0.14%, 4=0.11%, 50=26.33% 00:28:59.678 cpu : usr=93.35%, sys=5.86%, ctx=162, majf=0, minf=186 00:28:59.678 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:59.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:59.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:59.678 issued rwts: total=3524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:59.678 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:59.678 filename1: (groupid=0, jobs=1): err= 0: pid=955140: Mon Jul 15 09:37:10 2024 00:28:59.678 read: IOPS=141, BW=566KiB/s (580kB/s)(5680KiB/10033msec) 00:28:59.678 slat (nsec): min=6194, max=28314, avg=9803.70, stdev=2662.53 00:28:59.678 clat (usec): min=500, max=44069, avg=28229.58, stdev=19449.21 00:28:59.678 lat (usec): min=508, max=44095, avg=28239.39, stdev=19449.09 00:28:59.678 clat percentiles (usec): 00:28:59.678 | 1.00th=[ 529], 5.00th=[ 545], 10.00th=[ 594], 20.00th=[ 725], 00:28:59.678 | 30.00th=[ 857], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:28:59.678 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:28:59.678 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:28:59.678 | 99.99th=[44303] 00:28:59.678 bw ( KiB/s): min= 416, max= 768, per=28.72%, avg=566.40, stdev=98.00, samples=20 00:28:59.678 iops : min= 104, max= 192, avg=141.60, stdev=24.50, samples=20 00:28:59.678 lat (usec) : 750=25.92%, 1000=6.27% 00:28:59.678 lat (msec) : 2=1.06%, 50=66.76% 00:28:59.678 cpu : usr=93.87%, sys=5.85%, ctx=14, majf=0, minf=92 00:28:59.678 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:59.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:59.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:59.678 issued rwts: total=1420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:59.678 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:59.678 00:28:59.678 Run status group 0 (all jobs): 00:28:59.678 READ: bw=1971KiB/s (2018kB/s), 566KiB/s-1405KiB/s (580kB/s-1438kB/s), io=19.3MiB (20.2MB), run=10033-10036msec 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:59.678 09:37:10 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.678 00:28:59.678 real 0m11.355s 00:28:59.678 user 0m20.147s 00:28:59.678 sys 0m1.504s 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:59.678 09:37:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:59.678 ************************************ 00:28:59.678 END TEST fio_dif_1_multi_subsystems 00:28:59.678 ************************************ 00:28:59.678 09:37:10 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:59.678 09:37:10 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:59.678 09:37:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:59.678 09:37:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:59.678 09:37:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:59.678 ************************************ 00:28:59.678 START TEST fio_dif_rand_params 00:28:59.678 ************************************ 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub 
in "$@" 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:59.678 bdev_null0 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:59.678 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:59.936 [2024-07-15 09:37:10.884255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:59.936 09:37:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:59.936 { 00:28:59.936 "params": { 00:28:59.936 "name": "Nvme$subsystem", 00:28:59.936 "trtype": "$TEST_TRANSPORT", 00:28:59.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.936 "adrfam": "ipv4", 00:28:59.936 "trsvcid": "$NVMF_PORT", 00:28:59.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.937 "hdgst": ${hdgst:-false}, 00:28:59.937 "ddgst": ${ddgst:-false} 00:28:59.937 }, 00:28:59.937 "method": "bdev_nvme_attach_controller" 00:28:59.937 } 
00:28:59.937 EOF 00:28:59.937 )") 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
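The fio_plugin lines from autotest_common.sh traced around this point implement the sanitizer check: if the spdk_bdev fio plugin was linked against an ASan runtime (libasan or libclang_rt.asan), that runtime must appear in LD_PRELOAD ahead of the plugin or the preload fails at startup. A condensed sketch of the logic, not a verbatim copy of autotest_common.sh:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # the third ldd column is the resolved library path; empty if not linked in
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
# in this run both greps come back empty, so only the plugin itself is
# preloaded (hence the leading space in the traced LD_PRELOAD value)
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61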
00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=,
00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:28:59.937 "params": {
00:28:59.937 "name": "Nvme0",
00:28:59.937 "trtype": "tcp",
00:28:59.937 "traddr": "10.0.0.2",
00:28:59.937 "adrfam": "ipv4",
00:28:59.937 "trsvcid": "4420",
00:28:59.937 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:59.937 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:28:59.937 "hdgst": false,
00:28:59.937 "ddgst": false
00:28:59.937 },
00:28:59.937 "method": "bdev_nvme_attach_controller"
00:28:59.937 }'
00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:28:59.937 09:37:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:00.193 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:29:00.193 ...
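The job banner above (rw=randread, bs=128KiB, iodepth=3, then "Starting 3 threads") reflects the first rand_params pass from dif.sh: NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5. The job file itself is passed on /dev/fd/61 and never echoed, so the following is a plausible reconstruction rather than a capture; in particular time_based and the Nvme0n1 filename (controller Nvme0, namespace 1) are assumptions based on the usual SPDK bdev naming:

cat <<FIO    # the test hands this to fio via a file descriptor, not a file
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
FIO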
00:29:00.193 fio-3.35 00:29:00.193 Starting 3 threads 00:29:00.193 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.751 00:29:06.751 filename0: (groupid=0, jobs=1): err= 0: pid=956535: Mon Jul 15 09:37:16 2024 00:29:06.751 read: IOPS=220, BW=27.6MiB/s (28.9MB/s)(139MiB/5047msec) 00:29:06.751 slat (nsec): min=7719, max=45173, avg=13070.40, stdev=3361.12 00:29:06.751 clat (usec): min=5143, max=54538, avg=13535.28, stdev=3952.16 00:29:06.751 lat (usec): min=5155, max=54550, avg=13548.35, stdev=3952.48 00:29:06.751 clat percentiles (usec): 00:29:06.751 | 1.00th=[ 5932], 5.00th=[ 9241], 10.00th=[10552], 20.00th=[11863], 00:29:06.751 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13435], 60.00th=[13829], 00:29:06.751 | 70.00th=[14353], 80.00th=[14877], 90.00th=[16057], 95.00th=[17171], 00:29:06.751 | 99.00th=[19006], 99.50th=[50070], 99.90th=[54264], 99.95th=[54789], 00:29:06.751 | 99.99th=[54789] 00:29:06.751 bw ( KiB/s): min=27136, max=30720, per=34.99%, avg=28441.60, stdev=1115.55, samples=10 00:29:06.751 iops : min= 212, max= 240, avg=222.20, stdev= 8.72, samples=10 00:29:06.751 lat (msec) : 10=7.81%, 20=91.47%, 50=0.27%, 100=0.45% 00:29:06.751 cpu : usr=92.11%, sys=7.41%, ctx=7, majf=0, minf=86 00:29:06.751 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:06.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.751 issued rwts: total=1114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.751 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:06.751 filename0: (groupid=0, jobs=1): err= 0: pid=956536: Mon Jul 15 09:37:16 2024 00:29:06.751 read: IOPS=204, BW=25.5MiB/s (26.8MB/s)(129MiB/5045msec) 00:29:06.752 slat (nsec): min=7104, max=35819, avg=13331.00, stdev=3393.11 00:29:06.752 clat (usec): min=5509, max=57519, avg=14621.32, stdev=6954.28 00:29:06.752 lat (usec): min=5523, max=57532, avg=14634.65, stdev=6954.41 00:29:06.752 clat percentiles (usec): 00:29:06.752 | 1.00th=[ 9241], 5.00th=[10945], 10.00th=[11600], 20.00th=[12256], 00:29:06.752 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13304], 60.00th=[13698], 00:29:06.752 | 70.00th=[14222], 80.00th=[15008], 90.00th=[16057], 95.00th=[17171], 00:29:06.752 | 99.00th=[53216], 99.50th=[54789], 99.90th=[56886], 99.95th=[57410], 00:29:06.752 | 99.99th=[57410] 00:29:06.752 bw ( KiB/s): min=24064, max=28672, per=32.38%, avg=26316.80, stdev=1743.81, samples=10 00:29:06.752 iops : min= 188, max= 224, avg=205.60, stdev=13.62, samples=10 00:29:06.752 lat (msec) : 10=2.33%, 20=94.57%, 50=0.39%, 100=2.72% 00:29:06.752 cpu : usr=92.17%, sys=7.36%, ctx=11, majf=0, minf=149 00:29:06.752 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:06.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.752 issued rwts: total=1031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.752 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:06.752 filename0: (groupid=0, jobs=1): err= 0: pid=956537: Mon Jul 15 09:37:16 2024 00:29:06.752 read: IOPS=211, BW=26.5MiB/s (27.8MB/s)(133MiB/5005msec) 00:29:06.752 slat (nsec): min=7376, max=35593, avg=13300.14, stdev=2608.17 00:29:06.752 clat (usec): min=5224, max=53082, avg=14146.47, stdev=4390.49 00:29:06.752 lat (usec): min=5236, max=53097, avg=14159.77, stdev=4390.63 00:29:06.752 clat percentiles (usec): 00:29:06.752 | 
1.00th=[ 6718], 5.00th=[ 8225], 10.00th=[ 9634], 20.00th=[12256], 00:29:06.752 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13829], 60.00th=[14484], 00:29:06.752 | 70.00th=[15139], 80.00th=[16188], 90.00th=[17433], 95.00th=[18482], 00:29:06.752 | 99.00th=[20317], 99.50th=[51119], 99.90th=[52691], 99.95th=[53216], 00:29:06.752 | 99.99th=[53216] 00:29:06.752 bw ( KiB/s): min=24320, max=30464, per=33.30%, avg=27064.10, stdev=2142.56, samples=10 00:29:06.752 iops : min= 190, max= 238, avg=211.40, stdev=16.79, samples=10 00:29:06.752 lat (msec) : 10=10.28%, 20=88.58%, 50=0.57%, 100=0.57% 00:29:06.752 cpu : usr=91.23%, sys=8.29%, ctx=12, majf=0, minf=110 00:29:06.752 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:06.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.752 issued rwts: total=1060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.752 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:06.752 00:29:06.752 Run status group 0 (all jobs): 00:29:06.752 READ: bw=79.4MiB/s (83.2MB/s), 25.5MiB/s-27.6MiB/s (26.8MB/s-28.9MB/s), io=401MiB (420MB), run=5005-5047msec 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:06.752 09:37:17 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.752 bdev_null0 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.752 [2024-07-15 09:37:17.205003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.752 bdev_null1 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
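Each create_subsystem call in this test expands to the same four RPCs against the target (rpc_cmd is a thin wrapper that points scripts/rpc.py at /var/tmp/spdk.sock). For subsystem 0 with NULL_DIF=2, spelled out: a 64 MB null bdev with 512-byte blocks plus 16 bytes of per-block metadata carrying DIF type 2, exported under its own NQN on the existing TCP listener address:

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The same sequence repeats below for bdev_null1/cnode1 and bdev_null2/cnode2 with serial numbers 53313233-1 and 53313233-2.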
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:06.752 bdev_null2
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=()
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:29:06.752 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:06.752 {
00:29:06.752 "params": {
00:29:06.753 "name": "Nvme$subsystem",
00:29:06.753 "trtype": "$TEST_TRANSPORT",
00:29:06.753 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:06.753 "adrfam": "ipv4",
00:29:06.753 "trsvcid": "$NVMF_PORT",
00:29:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:06.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:06.753 "hdgst": ${hdgst:-false},
00:29:06.753 "ddgst": ${ddgst:-false}
00:29:06.753 },
00:29:06.753 "method": "bdev_nvme_attach_controller"
00:29:06.753 }
00:29:06.753 EOF
00:29:06.753 )")
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib=
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:06.753 {
00:29:06.753 "params": {
00:29:06.753 "name": "Nvme$subsystem",
00:29:06.753 "trtype": "$TEST_TRANSPORT",
00:29:06.753 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:06.753 "adrfam": "ipv4",
00:29:06.753 "trsvcid": "$NVMF_PORT",
00:29:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:06.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:06.753 "hdgst": ${hdgst:-false},
00:29:06.753 "ddgst": ${ddgst:-false}
00:29:06.753 },
00:29:06.753 "method": "bdev_nvme_attach_controller"
00:29:06.753 }
00:29:06.753 EOF
00:29:06.753 )")
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
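For readability: the xtrace above is nvmf/common.sh's gen_nvmf_target_json looping over the requested subsystem ids and appending one JSON fragment per subsystem to the config array through a quoted here-document; the fragments are later comma-joined (IFS=,) and printed for fio's --spdk_json_conf file descriptor. A minimal standalone sketch of that pattern follows; the variable names mirror the trace, while the tcp/10.0.0.2/4420 fallback defaults are illustrative assumptions (the real script takes them from its environment), and the outer wrapper that jq pretty-prints is omitted:

# Sketch of the config+=("$(cat <<-EOF ...)") pattern seen in the trace.
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One bdev_nvme_attach_controller parameter block per subsystem id,
        # the same shape as the heredoc expanded in the xtrace above.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the blocks, as the IFS=, / printf steps in the trace do.
    local IFS=,
    printf '%s\n' "${config[*]}"
}
# e.g. gen_target_json_sketch 0 1 2 reproduces the three Nvme0/Nvme1/Nvme2
# blocks printed a little further down in this log.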
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:06.753 {
00:29:06.753 "params": {
00:29:06.753 "name": "Nvme$subsystem",
00:29:06.753 "trtype": "$TEST_TRANSPORT",
00:29:06.753 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:06.753 "adrfam": "ipv4",
00:29:06.753 "trsvcid": "$NVMF_PORT",
00:29:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:06.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:06.753 "hdgst": ${hdgst:-false},
00:29:06.753 "ddgst": ${ddgst:-false}
00:29:06.753 },
00:29:06.753 "method": "bdev_nvme_attach_controller"
00:29:06.753 }
00:29:06.753 EOF
00:29:06.753 )")
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq .
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=,
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:29:06.753 "params": {
00:29:06.753 "name": "Nvme0",
00:29:06.753 "trtype": "tcp",
00:29:06.753 "traddr": "10.0.0.2",
00:29:06.753 "adrfam": "ipv4",
00:29:06.753 "trsvcid": "4420",
00:29:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:06.753 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:06.753 "hdgst": false,
00:29:06.753 "ddgst": false
00:29:06.753 },
00:29:06.753 "method": "bdev_nvme_attach_controller"
00:29:06.753 },{
00:29:06.753 "params": {
00:29:06.753 "name": "Nvme1",
00:29:06.753 "trtype": "tcp",
00:29:06.753 "traddr": "10.0.0.2",
00:29:06.753 "adrfam": "ipv4",
00:29:06.753 "trsvcid": "4420",
00:29:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:06.753 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:06.753 "hdgst": false,
00:29:06.753 "ddgst": false
00:29:06.753 },
00:29:06.753 "method": "bdev_nvme_attach_controller"
00:29:06.753 },{
00:29:06.753 "params": {
00:29:06.753 "name": "Nvme2",
00:29:06.753 "trtype": "tcp",
00:29:06.753 "traddr": "10.0.0.2",
00:29:06.753 "adrfam": "ipv4",
00:29:06.753 "trsvcid": "4420",
00:29:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:29:06.753 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:29:06.753 "hdgst": false,
00:29:06.753 "ddgst": false
00:29:06.753 },
00:29:06.753 "method": "bdev_nvme_attach_controller"
00:29:06.753 }'
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params --
common/autotest_common.sh@1345 -- # asan_lib= 00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:06.753 09:37:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:06.753 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:06.753 ... 00:29:06.753 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:06.753 ... 00:29:06.753 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:06.753 ... 00:29:06.753 fio-3.35 00:29:06.753 Starting 24 threads 00:29:06.753 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.953 00:29:18.953 filename0: (groupid=0, jobs=1): err= 0: pid=957401: Mon Jul 15 09:37:28 2024 00:29:18.953 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.6MiB/10015msec) 00:29:18.953 slat (nsec): min=13033, max=87371, avg=39136.38, stdev=12108.14 00:29:18.953 clat (usec): min=25314, max=45966, avg=33254.15, stdev=979.32 00:29:18.953 lat (usec): min=25335, max=45987, avg=33293.29, stdev=978.60 00:29:18.953 clat percentiles (usec): 00:29:18.953 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:29:18.953 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:29:18.953 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:29:18.953 | 99.00th=[34866], 99.50th=[35914], 99.90th=[45876], 99.95th=[45876], 00:29:18.953 | 99.99th=[45876] 00:29:18.953 bw ( KiB/s): min= 1792, max= 1923, per=4.16%, avg=1900.95, stdev=46.96, samples=20 00:29:18.953 iops : min= 448, max= 480, avg=475.20, stdev=11.72, samples=20 00:29:18.953 lat (msec) : 50=100.00% 00:29:18.953 cpu : usr=97.58%, sys=1.69%, ctx=44, majf=0, minf=57 00:29:18.953 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:18.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.953 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.953 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.953 filename0: (groupid=0, jobs=1): err= 0: pid=957402: Mon Jul 15 09:37:28 2024 00:29:18.953 read: IOPS=477, BW=1912KiB/s (1958kB/s)(18.7MiB/10009msec) 00:29:18.953 slat (usec): min=8, max=100, avg=25.17, stdev=21.60 00:29:18.953 clat (usec): min=18976, max=38681, avg=33259.58, stdev=1297.90 00:29:18.953 lat (usec): min=19020, max=38718, avg=33284.75, stdev=1295.10 00:29:18.953 clat percentiles (usec): 00:29:18.953 | 1.00th=[27395], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:29:18.953 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:29:18.953 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:29:18.953 | 99.00th=[34866], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:29:18.953 | 99.99th=[38536] 00:29:18.953 bw ( KiB/s): min= 1792, max= 1923, per=4.17%, avg=1906.68, stdev=40.42, samples=19 00:29:18.953 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:29:18.953 lat (msec) : 20=0.56%, 50=99.44% 00:29:18.953 cpu : usr=98.18%, 
sys=1.41%, ctx=13, majf=0, minf=82 00:29:18.953 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:18.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.953 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.953 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.953 filename0: (groupid=0, jobs=1): err= 0: pid=957403: Mon Jul 15 09:37:28 2024 00:29:18.953 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10002msec) 00:29:18.953 slat (nsec): min=8655, max=98780, avg=36190.79, stdev=19328.88 00:29:18.953 clat (usec): min=17659, max=68420, avg=33365.55, stdev=2189.86 00:29:18.953 lat (usec): min=17669, max=68447, avg=33401.75, stdev=2188.09 00:29:18.953 clat percentiles (usec): 00:29:18.953 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:29:18.953 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:29:18.953 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:29:18.953 | 99.00th=[34866], 99.50th=[35914], 99.90th=[68682], 99.95th=[68682], 00:29:18.953 | 99.99th=[68682] 00:29:18.953 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:29:18.953 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:29:18.953 lat (msec) : 20=0.04%, 50=99.62%, 100=0.34% 00:29:18.953 cpu : usr=97.96%, sys=1.63%, ctx=14, majf=0, minf=42 00:29:18.953 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:18.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.953 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.953 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.953 filename0: (groupid=0, jobs=1): err= 0: pid=957404: Mon Jul 15 09:37:28 2024 00:29:18.953 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.6MiB/10015msec) 00:29:18.953 slat (usec): min=11, max=126, avg=46.20, stdev=17.12 00:29:18.953 clat (usec): min=25391, max=50747, avg=33204.01, stdev=1049.88 00:29:18.953 lat (usec): min=25434, max=50775, avg=33250.21, stdev=1047.72 00:29:18.953 clat percentiles (usec): 00:29:18.953 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:29:18.953 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:29:18.953 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:29:18.953 | 99.00th=[34866], 99.50th=[35914], 99.90th=[45876], 99.95th=[45876], 00:29:18.953 | 99.99th=[50594] 00:29:18.953 bw ( KiB/s): min= 1792, max= 1923, per=4.16%, avg=1900.95, stdev=46.96, samples=20 00:29:18.953 iops : min= 448, max= 480, avg=475.20, stdev=11.72, samples=20 00:29:18.953 lat (msec) : 50=99.96%, 100=0.04% 00:29:18.953 cpu : usr=96.67%, sys=2.22%, ctx=183, majf=0, minf=52 00:29:18.953 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:18.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.953 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.953 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.953 filename0: (groupid=0, jobs=1): err= 0: pid=957405: Mon Jul 15 09:37:28 2024 00:29:18.953 read: IOPS=476, BW=1904KiB/s 
(1950kB/s)(18.6MiB/10015msec) 00:29:18.953 slat (nsec): min=9129, max=95694, avg=35614.89, stdev=14882.56 00:29:18.953 clat (usec): min=25204, max=45749, avg=33313.80, stdev=973.47 00:29:18.953 lat (usec): min=25249, max=45773, avg=33349.42, stdev=971.35 00:29:18.953 clat percentiles (usec): 00:29:18.953 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:29:18.953 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:29:18.953 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:29:18.953 | 99.00th=[34866], 99.50th=[35914], 99.90th=[45876], 99.95th=[45876], 00:29:18.953 | 99.99th=[45876] 00:29:18.953 bw ( KiB/s): min= 1792, max= 1923, per=4.16%, avg=1900.95, stdev=46.96, samples=20 00:29:18.953 iops : min= 448, max= 480, avg=475.20, stdev=11.72, samples=20 00:29:18.953 lat (msec) : 50=100.00% 00:29:18.953 cpu : usr=96.99%, sys=2.06%, ctx=153, majf=0, minf=40 00:29:18.953 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:18.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.953 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.953 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.953 filename0: (groupid=0, jobs=1): err= 0: pid=957406: Mon Jul 15 09:37:28 2024 00:29:18.953 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10007msec) 00:29:18.953 slat (usec): min=6, max=145, avg=14.46, stdev= 9.14 00:29:18.953 clat (usec): min=5098, max=36166, avg=33226.42, stdev=2281.85 00:29:18.953 lat (usec): min=5114, max=36181, avg=33240.88, stdev=2281.69 00:29:18.953 clat percentiles (usec): 00:29:18.953 | 1.00th=[19792], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:29:18.953 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:29:18.953 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:29:18.953 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:29:18.953 | 99.99th=[35914] 00:29:18.953 bw ( KiB/s): min= 1792, max= 2052, per=4.18%, avg=1913.80, stdev=51.00, samples=20 00:29:18.953 iops : min= 448, max= 513, avg=478.45, stdev=12.75, samples=20 00:29:18.953 lat (msec) : 10=0.33%, 20=0.67%, 50=99.00% 00:29:18.953 cpu : usr=98.05%, sys=1.54%, ctx=17, majf=0, minf=70 00:29:18.953 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:18.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.953 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.953 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.953 filename0: (groupid=0, jobs=1): err= 0: pid=957407: Mon Jul 15 09:37:28 2024 00:29:18.953 read: IOPS=475, BW=1901KiB/s (1946kB/s)(18.6MiB/10001msec) 00:29:18.953 slat (usec): min=8, max=105, avg=30.04, stdev=16.99 00:29:18.953 clat (usec): min=13574, max=80672, avg=33409.54, stdev=3013.76 00:29:18.953 lat (usec): min=13609, max=80703, avg=33439.58, stdev=3013.74 00:29:18.953 clat percentiles (usec): 00:29:18.953 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:29:18.953 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:29:18.953 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:29:18.954 | 99.00th=[34866], 99.50th=[35914], 99.90th=[80217], 99.95th=[80217], 
00:29:18.954 | 99.99th=[80217] 00:29:18.954 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1893.21, stdev=67.96, samples=19 00:29:18.954 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:29:18.954 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:29:18.954 cpu : usr=98.24%, sys=1.35%, ctx=19, majf=0, minf=55 00:29:18.954 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:18.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.954 filename0: (groupid=0, jobs=1): err= 0: pid=957408: Mon Jul 15 09:37:28 2024 00:29:18.954 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10005msec) 00:29:18.954 slat (nsec): min=10408, max=88889, avg=39611.93, stdev=13245.70 00:29:18.954 clat (usec): min=25391, max=74930, avg=33314.23, stdev=2078.05 00:29:18.954 lat (usec): min=25407, max=74958, avg=33353.85, stdev=2077.44 00:29:18.954 clat percentiles (usec): 00:29:18.954 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:29:18.954 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:29:18.954 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:29:18.954 | 99.00th=[34866], 99.50th=[35914], 99.90th=[66323], 99.95th=[66323], 00:29:18.954 | 99.99th=[74974] 00:29:18.954 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:29:18.954 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:29:18.954 lat (msec) : 50=99.66%, 100=0.34% 00:29:18.954 cpu : usr=96.47%, sys=2.22%, ctx=197, majf=0, minf=50 00:29:18.954 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:18.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.954 filename1: (groupid=0, jobs=1): err= 0: pid=957409: Mon Jul 15 09:37:28 2024 00:29:18.954 read: IOPS=474, BW=1899KiB/s (1945kB/s)(18.6MiB/10008msec) 00:29:18.954 slat (usec): min=10, max=107, avg=43.90, stdev=23.70 00:29:18.954 clat (usec): min=17635, max=72124, avg=33310.30, stdev=2437.35 00:29:18.954 lat (usec): min=17646, max=72155, avg=33354.19, stdev=2435.45 00:29:18.954 clat percentiles (usec): 00:29:18.954 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:29:18.954 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:29:18.954 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:29:18.954 | 99.00th=[34866], 99.50th=[35914], 99.90th=[71828], 99.95th=[71828], 00:29:18.954 | 99.99th=[71828] 00:29:18.954 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1893.21, stdev=67.96, samples=19 00:29:18.954 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:29:18.954 lat (msec) : 20=0.08%, 50=99.54%, 100=0.38% 00:29:18.954 cpu : usr=98.22%, sys=1.35%, ctx=25, majf=0, minf=42 00:29:18.954 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:18.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 
issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.954 filename1: (groupid=0, jobs=1): err= 0: pid=957410: Mon Jul 15 09:37:28 2024 00:29:18.954 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10018msec) 00:29:18.954 slat (usec): min=5, max=132, avg=49.11, stdev=30.77 00:29:18.954 clat (usec): min=7558, max=36264, avg=32950.98, stdev=1899.79 00:29:18.954 lat (usec): min=7570, max=36320, avg=33000.09, stdev=1896.97 00:29:18.954 clat percentiles (usec): 00:29:18.954 | 1.00th=[21890], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:29:18.954 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:29:18.954 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:29:18.954 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[36439], 00:29:18.954 | 99.99th=[36439] 00:29:18.954 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.60, stdev=50.44, samples=20 00:29:18.954 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:29:18.954 lat (msec) : 10=0.04%, 20=0.58%, 50=99.38% 00:29:18.954 cpu : usr=98.18%, sys=1.37%, ctx=28, majf=0, minf=52 00:29:18.954 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:18.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.954 filename1: (groupid=0, jobs=1): err= 0: pid=957411: Mon Jul 15 09:37:28 2024 00:29:18.954 read: IOPS=474, BW=1899KiB/s (1945kB/s)(18.6MiB/10007msec) 00:29:18.954 slat (usec): min=8, max=112, avg=50.58, stdev=25.77 00:29:18.954 clat (usec): min=13565, max=81502, avg=33216.28, stdev=3070.10 00:29:18.954 lat (usec): min=13590, max=81539, avg=33266.86, stdev=3067.24 00:29:18.954 clat percentiles (usec): 00:29:18.954 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:29:18.954 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:29:18.954 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:29:18.954 | 99.00th=[34866], 99.50th=[35914], 99.90th=[81265], 99.95th=[81265], 00:29:18.954 | 99.99th=[81265] 00:29:18.954 bw ( KiB/s): min= 1664, max= 2032, per=4.16%, avg=1900.00, stdev=73.57, samples=20 00:29:18.954 iops : min= 416, max= 508, avg=475.00, stdev=18.39, samples=20 00:29:18.954 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:29:18.954 cpu : usr=97.68%, sys=1.79%, ctx=63, majf=0, minf=49 00:29:18.954 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:18.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.954 filename1: (groupid=0, jobs=1): err= 0: pid=957412: Mon Jul 15 09:37:28 2024 00:29:18.954 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10007msec) 00:29:18.954 slat (nsec): min=8209, max=83636, avg=25638.70, stdev=11821.84 00:29:18.954 clat (usec): min=8015, max=65674, avg=33332.41, stdev=2580.99 00:29:18.954 lat (usec): min=8029, max=65697, avg=33358.05, stdev=2581.02 00:29:18.954 clat percentiles (usec): 00:29:18.954 | 1.00th=[29754], 5.00th=[32900], 
10.00th=[32900], 20.00th=[32900], 00:29:18.954 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:29:18.954 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:29:18.954 | 99.00th=[34866], 99.50th=[36439], 99.90th=[65799], 99.95th=[65799], 00:29:18.954 | 99.99th=[65799] 00:29:18.954 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1900.80, stdev=75.33, samples=20 00:29:18.954 iops : min= 416, max= 512, avg=475.20, stdev=18.83, samples=20 00:29:18.954 lat (msec) : 10=0.34%, 20=0.13%, 50=99.20%, 100=0.34% 00:29:18.954 cpu : usr=96.18%, sys=2.30%, ctx=306, majf=0, minf=45 00:29:18.954 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:18.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.954 filename1: (groupid=0, jobs=1): err= 0: pid=957413: Mon Jul 15 09:37:28 2024 00:29:18.954 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.6MiB/10015msec) 00:29:18.954 slat (nsec): min=8679, max=76370, avg=35543.65, stdev=11565.15 00:29:18.954 clat (usec): min=25474, max=45857, avg=33306.37, stdev=967.77 00:29:18.954 lat (usec): min=25514, max=45880, avg=33341.91, stdev=966.42 00:29:18.954 clat percentiles (usec): 00:29:18.954 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:29:18.954 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:29:18.954 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:29:18.954 | 99.00th=[34866], 99.50th=[35914], 99.90th=[45876], 99.95th=[45876], 00:29:18.954 | 99.99th=[45876] 00:29:18.954 bw ( KiB/s): min= 1792, max= 1923, per=4.16%, avg=1900.95, stdev=46.96, samples=20 00:29:18.954 iops : min= 448, max= 480, avg=475.20, stdev=11.72, samples=20 00:29:18.954 lat (msec) : 50=100.00% 00:29:18.954 cpu : usr=97.75%, sys=1.80%, ctx=19, majf=0, minf=42 00:29:18.954 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:18.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.954 filename1: (groupid=0, jobs=1): err= 0: pid=957414: Mon Jul 15 09:37:28 2024 00:29:18.954 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10002msec) 00:29:18.954 slat (usec): min=7, max=102, avg=27.82, stdev=20.93 00:29:18.954 clat (usec): min=5859, max=36037, avg=33093.86, stdev=2500.95 00:29:18.954 lat (usec): min=5897, max=36114, avg=33121.68, stdev=2499.77 00:29:18.954 clat percentiles (usec): 00:29:18.954 | 1.00th=[11469], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:29:18.954 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:29:18.954 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:29:18.954 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:29:18.954 | 99.99th=[35914] 00:29:18.954 bw ( KiB/s): min= 1792, max= 2048, per=4.20%, avg=1920.00, stdev=42.67, samples=19 00:29:18.954 iops : min= 448, max= 512, avg=480.00, stdev=10.67, samples=19 00:29:18.954 lat (msec) : 10=0.33%, 20=0.67%, 50=99.00% 00:29:18.954 cpu : usr=97.57%, sys=1.77%, ctx=83, majf=0, 
minf=68 00:29:18.954 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:18.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.954 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.955 filename1: (groupid=0, jobs=1): err= 0: pid=957415: Mon Jul 15 09:37:28 2024 00:29:18.955 read: IOPS=475, BW=1901KiB/s (1946kB/s)(18.6MiB/10001msec) 00:29:18.955 slat (usec): min=11, max=119, avg=47.51, stdev=16.75 00:29:18.955 clat (usec): min=25273, max=71661, avg=33252.75, stdev=1931.55 00:29:18.955 lat (usec): min=25285, max=71704, avg=33300.26, stdev=1930.93 00:29:18.955 clat percentiles (usec): 00:29:18.955 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:29:18.955 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:29:18.955 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:29:18.955 | 99.00th=[34866], 99.50th=[35914], 99.90th=[63177], 99.95th=[63177], 00:29:18.955 | 99.99th=[71828] 00:29:18.955 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:29:18.955 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:29:18.955 lat (msec) : 50=99.66%, 100=0.34% 00:29:18.955 cpu : usr=97.44%, sys=1.74%, ctx=96, majf=0, minf=37 00:29:18.955 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:18.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.955 filename1: (groupid=0, jobs=1): err= 0: pid=957416: Mon Jul 15 09:37:28 2024 00:29:18.955 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10010msec) 00:29:18.955 slat (usec): min=8, max=104, avg=33.82, stdev=22.70 00:29:18.955 clat (usec): min=19044, max=39572, avg=33206.26, stdev=1218.53 00:29:18.955 lat (usec): min=19097, max=39592, avg=33240.08, stdev=1215.97 00:29:18.955 clat percentiles (usec): 00:29:18.955 | 1.00th=[29754], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:29:18.955 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:29:18.955 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:29:18.955 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[36439], 00:29:18.955 | 99.99th=[39584] 00:29:18.955 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1905.84, stdev=40.23, samples=19 00:29:18.955 iops : min= 448, max= 480, avg=476.42, stdev=10.06, samples=19 00:29:18.955 lat (msec) : 20=0.31%, 50=99.69% 00:29:18.955 cpu : usr=97.97%, sys=1.61%, ctx=22, majf=0, minf=50 00:29:18.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:18.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 issued rwts: total=4782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.955 filename2: (groupid=0, jobs=1): err= 0: pid=957417: Mon Jul 15 09:37:28 2024 00:29:18.955 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.6MiB/10015msec) 00:29:18.955 slat (usec): min=8, max=132, 
avg=48.06, stdev=26.03 00:29:18.955 clat (usec): min=25438, max=50728, avg=33180.46, stdev=1054.10 00:29:18.955 lat (usec): min=25502, max=50752, avg=33228.52, stdev=1047.67 00:29:18.955 clat percentiles (usec): 00:29:18.955 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:29:18.955 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:29:18.955 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:29:18.955 | 99.00th=[34866], 99.50th=[35914], 99.90th=[45351], 99.95th=[45876], 00:29:18.955 | 99.99th=[50594] 00:29:18.955 bw ( KiB/s): min= 1792, max= 1923, per=4.16%, avg=1900.95, stdev=46.96, samples=20 00:29:18.955 iops : min= 448, max= 480, avg=475.20, stdev=11.72, samples=20 00:29:18.955 lat (msec) : 50=99.96%, 100=0.04% 00:29:18.955 cpu : usr=98.22%, sys=1.35%, ctx=11, majf=0, minf=43 00:29:18.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:18.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.955 filename2: (groupid=0, jobs=1): err= 0: pid=957418: Mon Jul 15 09:37:28 2024 00:29:18.955 read: IOPS=475, BW=1901KiB/s (1946kB/s)(18.6MiB/10001msec) 00:29:18.955 slat (usec): min=8, max=104, avg=28.67, stdev=19.01 00:29:18.955 clat (usec): min=13582, max=80437, avg=33402.87, stdev=2997.67 00:29:18.955 lat (usec): min=13625, max=80476, avg=33431.55, stdev=2997.29 00:29:18.955 clat percentiles (usec): 00:29:18.955 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:29:18.955 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:29:18.955 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:29:18.955 | 99.00th=[34866], 99.50th=[35914], 99.90th=[80217], 99.95th=[80217], 00:29:18.955 | 99.99th=[80217] 00:29:18.955 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1893.21, stdev=67.96, samples=19 00:29:18.955 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:29:18.955 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:29:18.955 cpu : usr=96.44%, sys=2.24%, ctx=247, majf=0, minf=50 00:29:18.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:18.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.955 filename2: (groupid=0, jobs=1): err= 0: pid=957419: Mon Jul 15 09:37:28 2024 00:29:18.955 read: IOPS=476, BW=1906KiB/s (1951kB/s)(18.6MiB/10008msec) 00:29:18.955 slat (nsec): min=5555, max=91032, avg=40116.82, stdev=14197.47 00:29:18.955 clat (usec): min=10964, max=58692, avg=33209.65, stdev=2047.42 00:29:18.955 lat (usec): min=10972, max=58725, avg=33249.76, stdev=2048.16 00:29:18.955 clat percentiles (usec): 00:29:18.955 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:29:18.955 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:29:18.955 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:29:18.955 | 99.00th=[34866], 99.50th=[35914], 99.90th=[58459], 99.95th=[58459], 00:29:18.955 | 99.99th=[58459] 00:29:18.955 bw ( KiB/s): 
min= 1667, max= 2048, per=4.15%, avg=1899.95, stdev=76.57, samples=19 00:29:18.955 iops : min= 416, max= 512, avg=474.95, stdev=19.27, samples=19 00:29:18.955 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:29:18.955 cpu : usr=97.92%, sys=1.46%, ctx=45, majf=0, minf=39 00:29:18.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:18.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.955 filename2: (groupid=0, jobs=1): err= 0: pid=957420: Mon Jul 15 09:37:28 2024 00:29:18.955 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10005msec) 00:29:18.955 slat (nsec): min=8858, max=83750, avg=38144.74, stdev=11302.59 00:29:18.955 clat (usec): min=25389, max=66547, avg=33339.98, stdev=2024.26 00:29:18.955 lat (usec): min=25429, max=66574, avg=33378.12, stdev=2023.01 00:29:18.955 clat percentiles (usec): 00:29:18.955 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:29:18.955 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:29:18.955 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:29:18.955 | 99.00th=[34866], 99.50th=[35914], 99.90th=[66323], 99.95th=[66323], 00:29:18.955 | 99.99th=[66323] 00:29:18.955 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:29:18.955 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:29:18.955 lat (msec) : 50=99.66%, 100=0.34% 00:29:18.955 cpu : usr=98.25%, sys=1.35%, ctx=11, majf=0, minf=45 00:29:18.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:18.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.955 filename2: (groupid=0, jobs=1): err= 0: pid=957421: Mon Jul 15 09:37:28 2024 00:29:18.955 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.6MiB/10015msec) 00:29:18.955 slat (usec): min=13, max=108, avg=41.55, stdev=14.18 00:29:18.955 clat (usec): min=25401, max=45897, avg=33237.46, stdev=982.44 00:29:18.955 lat (usec): min=25431, max=45927, avg=33279.01, stdev=981.09 00:29:18.955 clat percentiles (usec): 00:29:18.955 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:29:18.955 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:29:18.955 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:29:18.955 | 99.00th=[34866], 99.50th=[35914], 99.90th=[45876], 99.95th=[45876], 00:29:18.955 | 99.99th=[45876] 00:29:18.955 bw ( KiB/s): min= 1792, max= 1923, per=4.16%, avg=1900.95, stdev=46.96, samples=20 00:29:18.955 iops : min= 448, max= 480, avg=475.20, stdev=11.72, samples=20 00:29:18.955 lat (msec) : 50=100.00% 00:29:18.955 cpu : usr=97.15%, sys=1.81%, ctx=133, majf=0, minf=41 00:29:18.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:18.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.955 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.955 latency 
: target=0, window=0, percentile=100.00%, depth=16 00:29:18.955 filename2: (groupid=0, jobs=1): err= 0: pid=957422: Mon Jul 15 09:37:28 2024 00:29:18.955 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.7MiB/10006msec) 00:29:18.955 slat (usec): min=5, max=179, avg=28.79, stdev=20.76 00:29:18.955 clat (usec): min=13783, max=38578, avg=33187.59, stdev=1545.67 00:29:18.955 lat (usec): min=13819, max=38616, avg=33216.38, stdev=1544.66 00:29:18.955 clat percentiles (usec): 00:29:18.955 | 1.00th=[22938], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:29:18.955 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:29:18.955 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:29:18.955 | 99.00th=[34866], 99.50th=[35390], 99.90th=[38011], 99.95th=[38536], 00:29:18.955 | 99.99th=[38536] 00:29:18.955 bw ( KiB/s): min= 1792, max= 2048, per=4.19%, avg=1915.79, stdev=53.29, samples=19 00:29:18.955 iops : min= 448, max= 512, avg=478.95, stdev=13.32, samples=19 00:29:18.955 lat (msec) : 20=0.31%, 50=99.69% 00:29:18.955 cpu : usr=98.04%, sys=1.49%, ctx=48, majf=0, minf=45 00:29:18.955 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:18.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.956 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.956 issued rwts: total=4790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.956 filename2: (groupid=0, jobs=1): err= 0: pid=957423: Mon Jul 15 09:37:28 2024 00:29:18.956 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10006msec) 00:29:18.956 slat (usec): min=6, max=191, avg=38.04, stdev=28.15 00:29:18.956 clat (usec): min=7364, max=41606, avg=33011.71, stdev=2359.39 00:29:18.956 lat (usec): min=7376, max=41621, avg=33049.76, stdev=2360.38 00:29:18.956 clat percentiles (usec): 00:29:18.956 | 1.00th=[16319], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:29:18.956 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:29:18.956 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:29:18.956 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[40109], 00:29:18.956 | 99.99th=[41681] 00:29:18.956 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.60, stdev=65.33, samples=20 00:29:18.956 iops : min= 448, max= 512, avg=478.40, stdev=16.33, samples=20 00:29:18.956 lat (msec) : 10=0.33%, 20=0.67%, 50=99.00% 00:29:18.956 cpu : usr=98.08%, sys=1.50%, ctx=22, majf=0, minf=42 00:29:18.956 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:18.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.956 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.956 filename2: (groupid=0, jobs=1): err= 0: pid=957424: Mon Jul 15 09:37:28 2024 00:29:18.956 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10007msec) 00:29:18.956 slat (usec): min=8, max=115, avg=39.50, stdev=21.64 00:29:18.956 clat (usec): min=7944, max=65470, avg=33216.53, stdev=2499.89 00:29:18.956 lat (usec): min=7957, max=65512, avg=33256.03, stdev=2499.50 00:29:18.956 clat percentiles (usec): 00:29:18.956 | 1.00th=[29754], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:29:18.956 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 
60.00th=[33424], 00:29:18.956 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:29:18.956 | 99.00th=[34866], 99.50th=[35914], 99.90th=[65274], 99.95th=[65274], 00:29:18.956 | 99.99th=[65274] 00:29:18.956 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1900.80, stdev=75.15, samples=20 00:29:18.956 iops : min= 416, max= 512, avg=475.20, stdev=18.79, samples=20 00:29:18.956 lat (msec) : 10=0.34%, 20=0.04%, 50=99.29%, 100=0.34% 00:29:18.956 cpu : usr=97.86%, sys=1.54%, ctx=100, majf=0, minf=62 00:29:18.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:18.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.956 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:18.956 00:29:18.956 Run status group 0 (all jobs): 00:29:18.956 READ: bw=44.6MiB/s (46.8MB/s), 1899KiB/s-1920KiB/s (1945kB/s-1966kB/s), io=447MiB (469MB), run=10001-10018msec 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
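The destroy_subsystems teardown above (and continuing below for subsystem 2) always deletes the NVMe-oF subsystem first and only then the null bdev that backed its namespace, i.e. the exact reverse of the create_subsystem order, so the target never serves I/O from a bdev that has already been removed. A minimal sketch of the per-subsystem teardown, assuming rpc_cmd is the usual wrapper around scripts/rpc.py used throughout this test:

destroy_subsystem_sketch() {
    local sub_id=$1
    # Remove the subsystem first; its listener and namespace go with it.
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub_id"
    # Only then delete the backing null bdev.
    rpc_cmd bdev_null_delete "bdev_null$sub_id"
}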
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:18.956 bdev_null0
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:18.956 [2024-07-15 09:37:28.883319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:18.956 bdev_null1
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:29:18.956 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=()
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:18.957 {
00:29:18.957 "params": {
00:29:18.957 "name": "Nvme$subsystem",
00:29:18.957 "trtype": "$TEST_TRANSPORT",
00:29:18.957 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:18.957 "adrfam": "ipv4",
00:29:18.957 "trsvcid": "$NVMF_PORT",
00:29:18.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:18.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:18.957 "hdgst": ${hdgst:-false},
00:29:18.957 "ddgst": ${ddgst:-false}
00:29:18.957 },
00:29:18.957 "method": "bdev_nvme_attach_controller"
00:29:18.957 }
00:29:18.957 EOF
00:29:18.957 )")
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib=
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:18.957 {
00:29:18.957 "params": {
00:29:18.957 "name": "Nvme$subsystem",
00:29:18.957 "trtype": "$TEST_TRANSPORT",
00:29:18.957 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:18.957 "adrfam": "ipv4",
00:29:18.957 "trsvcid": "$NVMF_PORT",
00:29:18.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:18.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:18.957 "hdgst": ${hdgst:-false},
00:29:18.957 "ddgst": ${ddgst:-false}
00:29:18.957 },
00:29:18.957 "method": "bdev_nvme_attach_controller"
00:29:18.957 }
00:29:18.957 EOF
00:29:18.957 )")
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq .
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=,
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:29:18.957 "params": {
00:29:18.957 "name": "Nvme0",
00:29:18.957 "trtype": "tcp",
00:29:18.957 "traddr": "10.0.0.2",
00:29:18.957 "adrfam": "ipv4",
00:29:18.957 "trsvcid": "4420",
00:29:18.957 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:18.957 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:18.957 "hdgst": false,
00:29:18.957 "ddgst": false
00:29:18.957 },
00:29:18.957 "method": "bdev_nvme_attach_controller"
00:29:18.957 },{
00:29:18.957 "params": {
00:29:18.957 "name": "Nvme1",
00:29:18.957 "trtype": "tcp",
00:29:18.957 "traddr": "10.0.0.2",
00:29:18.957 "adrfam": "ipv4",
00:29:18.957 "trsvcid": "4420",
00:29:18.957 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:18.957 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:18.957 "hdgst": false,
00:29:18.957 "ddgst": false
00:29:18.957 },
00:29:18.957 "method": "bdev_nvme_attach_controller"
00:29:18.957 }'
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:29:18.957 09:37:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:18.957 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:29:18.957 ...
00:29:18.957 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:29:18.957 ...
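The fio banner here confirms what the target/dif.sh@115 parameter block above configured for this pass: two job sections (filename0 and filename1, from files=1), each cloned by numjobs=2 (hence the "Starting 4 threads" line below), random reads with the mixed block-size list bs=8k,16k,128k (read/write/trim sizes, matching the (R)/(W)/(T) values in the banner) at iodepth=8 for runtime=5 seconds against dif-type-1 null bdevs. The job file itself is generated by gen_fio_conf and never echoed to the log; a plausible equivalent written from bash, with the Nvme0n1/Nvme1n1 bdev names assumed from the controllers attached above, would be:

# Sketch only: a job file of the same shape as the banner; section names
# and options mirror the log, the filenames and time_based are assumptions.
cat > dif_rand_params.fio <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
FIO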
00:29:18.957 fio-3.35 00:29:18.957 Starting 4 threads 00:29:18.957 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.214 00:29:24.214 filename0: (groupid=0, jobs=1): err= 0: pid=958799: Mon Jul 15 09:37:34 2024 00:29:24.214 read: IOPS=1982, BW=15.5MiB/s (16.2MB/s)(77.5MiB/5001msec) 00:29:24.214 slat (nsec): min=3767, max=32650, avg=13401.85, stdev=4207.01 00:29:24.214 clat (usec): min=883, max=7308, avg=3986.95, stdev=446.14 00:29:24.214 lat (usec): min=896, max=7324, avg=4000.35, stdev=446.26 00:29:24.214 clat percentiles (usec): 00:29:24.214 | 1.00th=[ 2671], 5.00th=[ 3392], 10.00th=[ 3621], 20.00th=[ 3785], 00:29:24.214 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:29:24.214 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4228], 95.00th=[ 4424], 00:29:24.214 | 99.00th=[ 5800], 99.50th=[ 6456], 99.90th=[ 7046], 99.95th=[ 7242], 00:29:24.214 | 99.99th=[ 7308] 00:29:24.214 bw ( KiB/s): min=15616, max=16384, per=25.44%, avg=15905.78, stdev=248.96, samples=9 00:29:24.214 iops : min= 1952, max= 2048, avg=1988.22, stdev=31.12, samples=9 00:29:24.214 lat (usec) : 1000=0.06% 00:29:24.214 lat (msec) : 2=0.37%, 4=46.76%, 10=52.81% 00:29:24.214 cpu : usr=93.72%, sys=5.80%, ctx=7, majf=0, minf=10 00:29:24.214 IO depths : 1=0.4%, 2=16.3%, 4=56.7%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:24.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.214 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.214 issued rwts: total=9915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.214 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:24.214 filename0: (groupid=0, jobs=1): err= 0: pid=958800: Mon Jul 15 09:37:34 2024 00:29:24.214 read: IOPS=1913, BW=15.0MiB/s (15.7MB/s)(74.8MiB/5002msec) 00:29:24.214 slat (nsec): min=3897, max=36925, avg=14104.74, stdev=4199.52 00:29:24.214 clat (usec): min=767, max=7555, avg=4126.84, stdev=631.08 00:29:24.214 lat (usec): min=781, max=7570, avg=4140.95, stdev=630.84 00:29:24.214 clat percentiles (usec): 00:29:24.214 | 1.00th=[ 2278], 5.00th=[ 3556], 10.00th=[ 3720], 20.00th=[ 3949], 00:29:24.214 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4080], 00:29:24.214 | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4686], 95.00th=[ 5342], 00:29:24.214 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7373], 00:29:24.214 | 99.99th=[ 7570] 00:29:24.214 bw ( KiB/s): min=14992, max=15856, per=24.45%, avg=15287.11, stdev=302.96, samples=9 00:29:24.214 iops : min= 1874, max= 1982, avg=1910.89, stdev=37.87, samples=9 00:29:24.214 lat (usec) : 1000=0.23% 00:29:24.214 lat (msec) : 2=0.55%, 4=37.60%, 10=61.62% 00:29:24.214 cpu : usr=93.76%, sys=5.36%, ctx=95, majf=0, minf=0 00:29:24.214 IO depths : 1=0.1%, 2=17.1%, 4=55.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:24.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.214 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.214 issued rwts: total=9573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.214 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:24.214 filename1: (groupid=0, jobs=1): err= 0: pid=958801: Mon Jul 15 09:37:34 2024 00:29:24.214 read: IOPS=1950, BW=15.2MiB/s (16.0MB/s)(76.2MiB/5001msec) 00:29:24.214 slat (nsec): min=3883, max=43396, avg=14933.58, stdev=4757.19 00:29:24.214 clat (usec): min=678, max=7655, avg=4046.11, stdev=548.93 00:29:24.214 lat (usec): min=691, max=7664, avg=4061.05, stdev=548.93 00:29:24.214 clat 
percentiles (usec): 00:29:24.214 | 1.00th=[ 2089], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3884], 00:29:24.214 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:29:24.214 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4359], 95.00th=[ 4883], 00:29:24.214 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7242], 99.95th=[ 7308], 00:29:24.214 | 99.99th=[ 7635] 00:29:24.214 bw ( KiB/s): min=15136, max=15887, per=24.94%, avg=15591.00, stdev=278.85, samples=9 00:29:24.214 iops : min= 1892, max= 1985, avg=1948.78, stdev=34.74, samples=9 00:29:24.214 lat (usec) : 750=0.01%, 1000=0.11% 00:29:24.214 lat (msec) : 2=0.79%, 4=43.21%, 10=55.88% 00:29:24.214 cpu : usr=91.70%, sys=6.38%, ctx=177, majf=0, minf=0 00:29:24.214 IO depths : 1=0.2%, 2=19.4%, 4=53.8%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:24.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.214 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.214 issued rwts: total=9752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.214 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:24.214 filename1: (groupid=0, jobs=1): err= 0: pid=958802: Mon Jul 15 09:37:34 2024 00:29:24.214 read: IOPS=1968, BW=15.4MiB/s (16.1MB/s)(76.9MiB/5002msec) 00:29:24.214 slat (nsec): min=3832, max=34133, avg=13223.68, stdev=3775.04 00:29:24.214 clat (usec): min=755, max=8120, avg=4015.76, stdev=490.11 00:29:24.214 lat (usec): min=768, max=8139, avg=4028.98, stdev=490.05 00:29:24.214 clat percentiles (usec): 00:29:24.214 | 1.00th=[ 2606], 5.00th=[ 3425], 10.00th=[ 3621], 20.00th=[ 3818], 00:29:24.214 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:29:24.214 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4293], 95.00th=[ 4490], 00:29:24.214 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 7242], 99.95th=[ 7373], 00:29:24.214 | 99.99th=[ 8094] 00:29:24.214 bw ( KiB/s): min=15456, max=16128, per=25.23%, avg=15774.22, stdev=206.06, samples=9 00:29:24.214 iops : min= 1932, max= 2016, avg=1971.78, stdev=25.76, samples=9 00:29:24.214 lat (usec) : 1000=0.02% 00:29:24.214 lat (msec) : 2=0.47%, 4=42.87%, 10=56.65% 00:29:24.215 cpu : usr=93.14%, sys=6.18%, ctx=80, majf=0, minf=0 00:29:24.215 IO depths : 1=0.3%, 2=17.4%, 4=55.5%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:24.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.215 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.215 issued rwts: total=9847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.215 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:24.215 00:29:24.215 Run status group 0 (all jobs): 00:29:24.215 READ: bw=61.0MiB/s (64.0MB/s), 15.0MiB/s-15.5MiB/s (15.7MB/s-16.2MB/s), io=305MiB (320MB), run=5001-5002msec 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.215 09:37:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.215 00:29:24.215 real 0m24.317s 00:29:24.215 user 4m31.777s 00:29:24.215 sys 0m7.340s 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:24.215 09:37:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:24.215 ************************************ 00:29:24.215 END TEST fio_dif_rand_params 00:29:24.215 ************************************ 00:29:24.215 09:37:35 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:24.215 09:37:35 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:24.215 09:37:35 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:24.215 09:37:35 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.215 09:37:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:24.215 ************************************ 00:29:24.215 START TEST fio_dif_digest 00:29:24.215 ************************************ 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:24.215 09:37:35 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:24.215 bdev_null0 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:24.215 [2024-07-15 09:37:35.255364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:24.215 { 00:29:24.215 "params": { 00:29:24.215 "name": "Nvme$subsystem", 00:29:24.215 "trtype": "$TEST_TRANSPORT", 00:29:24.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.215 "adrfam": "ipv4", 00:29:24.215 "trsvcid": "$NVMF_PORT", 00:29:24.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.215 "hdgst": ${hdgst:-false}, 
00:29:24.215 "ddgst": ${ddgst:-false} 00:29:24.215 }, 00:29:24.215 "method": "bdev_nvme_attach_controller" 00:29:24.215 } 00:29:24.215 EOF 00:29:24.215 )") 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:24.215 "params": { 00:29:24.215 "name": "Nvme0", 00:29:24.215 "trtype": "tcp", 00:29:24.215 "traddr": "10.0.0.2", 00:29:24.215 "adrfam": "ipv4", 00:29:24.215 "trsvcid": "4420", 00:29:24.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:24.215 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:24.215 "hdgst": true, 00:29:24.215 "ddgst": true 00:29:24.215 }, 00:29:24.215 "method": "bdev_nvme_attach_controller" 00:29:24.215 }' 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:24.215 09:37:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:24.480 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:24.480 ... 
00:29:24.480 fio-3.35 00:29:24.480 Starting 3 threads 00:29:24.480 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.747 00:29:36.747 filename0: (groupid=0, jobs=1): err= 0: pid=959560: Mon Jul 15 09:37:46 2024 00:29:36.747 read: IOPS=201, BW=25.2MiB/s (26.5MB/s)(254MiB/10045msec) 00:29:36.747 slat (nsec): min=4273, max=73269, avg=15559.74, stdev=4302.45 00:29:36.747 clat (usec): min=11124, max=55126, avg=14810.96, stdev=1485.54 00:29:36.747 lat (usec): min=11138, max=55139, avg=14826.52, stdev=1485.32 00:29:36.747 clat percentiles (usec): 00:29:36.747 | 1.00th=[12518], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:29:36.747 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:29:36.747 | 70.00th=[15270], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:29:36.747 | 99.00th=[17171], 99.50th=[17433], 99.90th=[21627], 99.95th=[44827], 00:29:36.747 | 99.99th=[55313] 00:29:36.747 bw ( KiB/s): min=25088, max=26880, per=32.57%, avg=25945.60, stdev=500.24, samples=20 00:29:36.747 iops : min= 196, max= 210, avg=202.70, stdev= 3.91, samples=20 00:29:36.747 lat (msec) : 20=99.85%, 50=0.10%, 100=0.05% 00:29:36.747 cpu : usr=87.06%, sys=9.16%, ctx=580, majf=0, minf=126 00:29:36.747 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.747 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.747 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:36.747 filename0: (groupid=0, jobs=1): err= 0: pid=959561: Mon Jul 15 09:37:46 2024 00:29:36.747 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(271MiB/10008msec) 00:29:36.747 slat (nsec): min=4409, max=49991, avg=16564.26, stdev=4041.14 00:29:36.747 clat (usec): min=9487, max=22500, avg=13807.79, stdev=953.45 00:29:36.747 lat (usec): min=9501, max=22514, avg=13824.35, stdev=953.06 00:29:36.747 clat percentiles (usec): 00:29:36.747 | 1.00th=[11600], 5.00th=[12256], 10.00th=[12649], 20.00th=[13042], 00:29:36.747 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:29:36.747 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15008], 95.00th=[15270], 00:29:36.747 | 99.00th=[16057], 99.50th=[16319], 99.90th=[19530], 99.95th=[19530], 00:29:36.747 | 99.99th=[22414] 00:29:36.747 bw ( KiB/s): min=26880, max=28672, per=34.83%, avg=27750.40, stdev=465.42, samples=20 00:29:36.747 iops : min= 210, max= 224, avg=216.80, stdev= 3.64, samples=20 00:29:36.747 lat (msec) : 10=0.09%, 20=99.86%, 50=0.05% 00:29:36.747 cpu : usr=92.07%, sys=7.32%, ctx=20, majf=0, minf=104 00:29:36.747 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.747 issued rwts: total=2171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.747 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:36.747 filename0: (groupid=0, jobs=1): err= 0: pid=959562: Mon Jul 15 09:37:46 2024 00:29:36.747 read: IOPS=205, BW=25.6MiB/s (26.9MB/s)(257MiB/10007msec) 00:29:36.747 slat (nsec): min=4592, max=35304, avg=13963.26, stdev=2813.91 00:29:36.747 clat (usec): min=8878, max=22184, avg=14609.57, stdev=963.74 00:29:36.747 lat (usec): min=8891, max=22196, avg=14623.54, stdev=963.71 00:29:36.747 clat percentiles (usec): 00:29:36.747 | 1.00th=[12387], 
5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:29:36.747 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14484], 60.00th=[14746], 00:29:36.747 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15795], 95.00th=[16188], 00:29:36.747 | 99.00th=[17171], 99.50th=[17433], 99.90th=[19006], 99.95th=[19006], 00:29:36.747 | 99.99th=[22152] 00:29:36.747 bw ( KiB/s): min=25344, max=27392, per=32.94%, avg=26242.55, stdev=655.15, samples=20 00:29:36.747 iops : min= 198, max= 214, avg=205.00, stdev= 5.13, samples=20 00:29:36.747 lat (msec) : 10=0.10%, 20=99.85%, 50=0.05% 00:29:36.747 cpu : usr=92.79%, sys=6.74%, ctx=17, majf=0, minf=143 00:29:36.747 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.747 issued rwts: total=2052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.747 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:36.747 00:29:36.747 Run status group 0 (all jobs): 00:29:36.747 READ: bw=77.8MiB/s (81.6MB/s), 25.2MiB/s-27.1MiB/s (26.5MB/s-28.4MB/s), io=782MiB (819MB), run=10007-10045msec 00:29:36.747 09:37:46 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:36.747 09:37:46 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:36.747 09:37:46 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:36.747 09:37:46 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:36.747 09:37:46 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:36.747 09:37:46 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:36.747 09:37:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.747 09:37:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:36.747 09:37:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.747 09:37:46 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:36.748 09:37:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.748 09:37:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:36.748 09:37:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.748 00:29:36.748 real 0m11.046s 00:29:36.748 user 0m28.366s 00:29:36.748 sys 0m2.574s 00:29:36.748 09:37:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:36.748 09:37:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:36.748 ************************************ 00:29:36.748 END TEST fio_dif_digest 00:29:36.748 ************************************ 00:29:36.748 09:37:46 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:36.748 09:37:46 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:36.748 09:37:46 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:36.748 09:37:46 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:36.748 09:37:46 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:36.748 09:37:46 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:36.748 09:37:46 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:36.748 09:37:46 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:36.748 09:37:46 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:29:36.748 rmmod nvme_tcp 00:29:36.748 rmmod nvme_fabrics 00:29:36.748 rmmod nvme_keyring 00:29:36.748 09:37:46 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:36.748 09:37:46 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:36.748 09:37:46 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:36.748 09:37:46 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 953511 ']' 00:29:36.748 09:37:46 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 953511 00:29:36.748 09:37:46 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 953511 ']' 00:29:36.748 09:37:46 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 953511 00:29:36.748 09:37:46 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:29:36.748 09:37:46 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:36.748 09:37:46 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 953511 00:29:36.748 09:37:46 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:36.748 09:37:46 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:36.748 09:37:46 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 953511' 00:29:36.748 killing process with pid 953511 00:29:36.748 09:37:46 nvmf_dif -- common/autotest_common.sh@967 -- # kill 953511 00:29:36.748 09:37:46 nvmf_dif -- common/autotest_common.sh@972 -- # wait 953511 00:29:36.748 09:37:46 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:36.748 09:37:46 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:36.748 Waiting for block devices as requested 00:29:36.748 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:36.748 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:36.748 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:37.006 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:37.006 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:37.006 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:37.006 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:37.264 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:37.264 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:29:37.264 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:37.520 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:37.520 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:37.520 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:37.520 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:37.778 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:37.778 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:37.778 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:38.036 09:37:49 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:38.036 09:37:49 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:38.036 09:37:49 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:38.036 09:37:49 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:38.036 09:37:49 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.036 09:37:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:38.036 09:37:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.937 09:37:51 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:39.937 00:29:39.937 real 1m6.837s 00:29:39.937 user 6m26.785s 00:29:39.937 sys 0m19.875s 00:29:39.937 09:37:51 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:39.937 
09:37:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:39.938 ************************************ 00:29:39.938 END TEST nvmf_dif 00:29:39.938 ************************************ 00:29:39.938 09:37:51 -- common/autotest_common.sh@1142 -- # return 0 00:29:39.938 09:37:51 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:39.938 09:37:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:39.938 09:37:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:39.938 09:37:51 -- common/autotest_common.sh@10 -- # set +x 00:29:39.938 ************************************ 00:29:39.938 START TEST nvmf_abort_qd_sizes 00:29:39.938 ************************************ 00:29:39.938 09:37:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:40.196 * Looking for test storage... 00:29:40.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.196 09:37:51 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:40.196 09:37:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:42.096 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:42.097 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:42.097 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:42.097 Found net devices under 0000:09:00.0: cvl_0_0 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:42.097 Found net devices under 0000:09:00.1: cvl_0_1 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:42.097 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.354 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.354 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.354 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:42.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:29:42.354 00:29:42.354 --- 10.0.0.2 ping statistics --- 00:29:42.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.354 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:29:42.354 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:42.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:29:42.354 00:29:42.354 --- 10.0.0.1 ping statistics --- 00:29:42.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.354 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:29:42.354 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.354 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:42.354 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:42.354 09:37:53 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:43.727 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:43.727 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:43.727 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:43.727 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:43.727 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:43.727 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:43.727 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:43.727 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:43.727 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:43.727 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:43.727 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:43.727 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:43.727 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:43.727 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:43.727 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:43.727 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:44.664 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=964363 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 964363 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 964363 ']' 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:44.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:44.664 09:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:44.664 [2024-07-15 09:37:55.840965] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:29:44.664 [2024-07-15 09:37:55.841036] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.922 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.922 [2024-07-15 09:37:55.907692] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:44.922 [2024-07-15 09:37:56.014939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.922 [2024-07-15 09:37:56.014991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.922 [2024-07-15 09:37:56.015005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.922 [2024-07-15 09:37:56.015017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.922 [2024-07-15 09:37:56.015026] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.922 [2024-07-15 09:37:56.015093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.922 [2024-07-15 09:37:56.015130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:44.922 [2024-07-15 09:37:56.015186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:44.922 [2024-07-15 09:37:56.015189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:29:45.180 09:37:56 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:45.180 09:37:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:45.180 ************************************ 00:29:45.180 START TEST spdk_target_abort 00:29:45.180 ************************************ 00:29:45.180 09:37:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:29:45.180 09:37:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:45.180 09:37:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:29:45.180 09:37:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.180 09:37:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.458 spdk_targetn1 00:29:48.458 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.458 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:48.458 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.458 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.458 [2024-07-15 09:37:59.059486] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.458 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.458 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:48.458 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.458 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.458 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.458 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:48.458 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.459 [2024-07-15 09:37:59.091708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:48.459 09:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:48.459 EAL: No free 2048 kB hugepages 
reported on node 1 00:29:51.738 Initializing NVMe Controllers 00:29:51.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:51.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:51.738 Initialization complete. Launching workers. 00:29:51.738 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13147, failed: 0 00:29:51.738 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1231, failed to submit 11916 00:29:51.738 success 770, unsuccess 461, failed 0 00:29:51.738 09:38:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:51.738 09:38:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:51.738 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.015 Initializing NVMe Controllers 00:29:55.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:55.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:55.015 Initialization complete. Launching workers. 00:29:55.015 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8809, failed: 0 00:29:55.015 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1237, failed to submit 7572 00:29:55.015 success 328, unsuccess 909, failed 0 00:29:55.015 09:38:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:55.015 09:38:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:55.015 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.291 Initializing NVMe Controllers 00:29:58.291 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:58.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:58.291 Initialization complete. Launching workers. 
00:29:58.291 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30978, failed: 0 00:29:58.291 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2681, failed to submit 28297 00:29:58.291 success 511, unsuccess 2170, failed 0 00:29:58.291 09:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:58.291 09:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.291 09:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.291 09:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.291 09:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:58.291 09:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.291 09:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.221 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.221 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 964363 00:29:59.221 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 964363 ']' 00:29:59.221 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 964363 00:29:59.221 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:29:59.221 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:59.221 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 964363 00:29:59.221 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:59.221 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:59.221 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 964363' 00:29:59.221 killing process with pid 964363 00:29:59.221 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 964363 00:29:59.221 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 964363 00:29:59.479 00:29:59.479 real 0m14.262s 00:29:59.479 user 0m54.058s 00:29:59.479 sys 0m2.570s 00:29:59.479 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:59.479 09:38:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.479 ************************************ 00:29:59.479 END TEST spdk_target_abort 00:29:59.479 ************************************ 00:29:59.479 09:38:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:59.479 09:38:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:59.479 09:38:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:59.479 09:38:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:59.479 09:38:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:59.479 
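The spdk_target_abort pass above reduces to a short loop: rabort assembles the transport ID string field by field (trtype, adrfam, traddr, trsvcid, subnqn) and runs the abort example once per queue depth. A minimal sketch using the values from this run:

    # Queue depths and transport ID exactly as used by rabort above.
    qds=(4 24 64)
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in "${qds[@]}"; do
        # -q queue depth, -w rw -M 50 = 50/50 read/write mix, -o 4096 = 4 KiB I/O
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done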
************************************ 00:29:59.479 START TEST kernel_target_abort 00:29:59.479 ************************************ 00:29:59.479 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:29:59.479 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:59.479 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:59.479 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:59.479 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:59.479 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.479 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.479 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:59.479 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:59.480 09:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:00.412 Waiting for block devices as requested 00:30:00.671 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:00.671 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:00.671 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:00.930 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:00.930 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:00.930 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:01.188 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:01.188 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:01.188 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:30:01.445 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:01.445 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:01.445 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:01.703 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:01.703 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:01.703 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:01.703 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:01.960 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:01.960 No valid GPT data, bailing 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:01.960 09:38:13 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:01.960 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:30:02.218 00:30:02.218 Discovery Log Number of Records 2, Generation counter 2 00:30:02.218 =====Discovery Log Entry 0====== 00:30:02.218 trtype: tcp 00:30:02.218 adrfam: ipv4 00:30:02.218 subtype: current discovery subsystem 00:30:02.218 treq: not specified, sq flow control disable supported 00:30:02.218 portid: 1 00:30:02.218 trsvcid: 4420 00:30:02.218 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:02.218 traddr: 10.0.0.1 00:30:02.218 eflags: none 00:30:02.218 sectype: none 00:30:02.218 =====Discovery Log Entry 1====== 00:30:02.218 trtype: tcp 00:30:02.218 adrfam: ipv4 00:30:02.218 subtype: nvme subsystem 00:30:02.218 treq: not specified, sq flow control disable supported 00:30:02.218 portid: 1 00:30:02.218 trsvcid: 4420 00:30:02.218 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:02.218 traddr: 10.0.0.1 00:30:02.218 eflags: none 00:30:02.218 sectype: none 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:02.218 09:38:13 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:02.218 09:38:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:02.218 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.517 Initializing NVMe Controllers 00:30:05.517 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:05.517 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:05.517 Initialization complete. Launching workers. 00:30:05.517 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56223, failed: 0 00:30:05.517 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56223, failed to submit 0 00:30:05.517 success 0, unsuccess 56223, failed 0 00:30:05.517 09:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:05.517 09:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:05.517 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.857 Initializing NVMe Controllers 00:30:08.858 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:08.858 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:08.858 Initialization complete. Launching workers. 
00:30:08.858 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102081, failed: 0 00:30:08.858 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25602, failed to submit 76479 00:30:08.858 success 0, unsuccess 25602, failed 0 00:30:08.858 09:38:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:08.858 09:38:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:08.858 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.386 Initializing NVMe Controllers 00:30:11.386 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:11.386 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:11.386 Initialization complete. Launching workers. 00:30:11.386 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98427, failed: 0 00:30:11.386 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24630, failed to submit 73797 00:30:11.386 success 0, unsuccess 24630, failed 0 00:30:11.386 09:38:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:11.386 09:38:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:11.386 09:38:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:11.386 09:38:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:11.386 09:38:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:11.386 09:38:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:11.386 09:38:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:11.386 09:38:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:11.386 09:38:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:11.645 09:38:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:13.015 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:13.015 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:13.015 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:13.015 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:13.015 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:13.015 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:13.015 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:13.015 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:13.015 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:13.015 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:13.015 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:13.015 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:13.015 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:13.015 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:30:13.015 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:13.015 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:13.948 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:30:13.949 00:30:13.949 real 0m14.510s 00:30:13.949 user 0m6.718s 00:30:13.949 sys 0m3.171s 00:30:13.949 09:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:13.949 09:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.949 ************************************ 00:30:13.949 END TEST kernel_target_abort 00:30:13.949 ************************************ 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:13.949 rmmod nvme_tcp 00:30:13.949 rmmod nvme_fabrics 00:30:13.949 rmmod nvme_keyring 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 964363 ']' 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 964363 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 964363 ']' 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 964363 00:30:13.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (964363) - No such process 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 964363 is not found' 00:30:13.949 Process with pid 964363 is not found 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:13.949 09:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:15.323 Waiting for block devices as requested 00:30:15.323 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:15.323 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:15.582 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:15.582 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:15.582 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:15.582 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:15.840 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:15.840 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:15.840 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:30:16.098 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:16.098 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:16.098 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:16.355 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:16.355 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 
00:30:16.355 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:16.355 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:16.615 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:16.615 09:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:16.615 09:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:16.615 09:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:16.615 09:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:16.615 09:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.615 09:38:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:16.615 09:38:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.148 09:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:19.148 00:30:19.148 real 0m38.647s 00:30:19.148 user 1m2.992s 00:30:19.148 sys 0m9.301s 00:30:19.148 09:38:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:19.148 09:38:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:19.148 ************************************ 00:30:19.148 END TEST nvmf_abort_qd_sizes 00:30:19.148 ************************************ 00:30:19.148 09:38:29 -- common/autotest_common.sh@1142 -- # return 0 00:30:19.148 09:38:29 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:19.148 09:38:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:19.148 09:38:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:19.148 09:38:29 -- common/autotest_common.sh@10 -- # set +x 00:30:19.148 ************************************ 00:30:19.148 START TEST keyring_file 00:30:19.148 ************************************ 00:30:19.148 09:38:29 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:19.148 * Looking for test storage... 
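The configure_kernel_target steps traced in the kernel_target_abort section above are plain configfs writes. The xtrace does not show redirect targets, so the attribute file names below are the standard kernel nvmet configfs entries, filled in as an assumption; the NQN, device, and address values come from this run:

    # Sketch of configure_kernel_target as traced above (attribute file names
    # inferred, since xtrace hides redirections).
    modprobe nvmet
    cd /sys/kernel/config/nvmet
    mkdir -p subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 ports/1
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
    echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    echo 10.0.0.1 > ports/1/addr_traddr
    echo tcp > ports/1/addr_trtype
    echo 4420 > ports/1/addr_trsvcid
    echo ipv4 > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
    # Teardown (clean_kernel_target) mirrors this: remove the port link,
    # rmdir namespace/port/subsystem, then modprobe -r nvmet_tcp nvmet.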
00:30:19.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:19.148 09:38:29 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:19.148 09:38:29 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.148 09:38:29 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.148 09:38:29 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.148 09:38:29 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.148 09:38:29 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.148 09:38:29 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.148 09:38:29 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.148 09:38:29 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:19.148 09:38:29 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:19.148 09:38:29 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:19.149 09:38:29 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:19.149 09:38:29 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:19.149 09:38:29 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:19.149 09:38:29 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:19.149 09:38:29 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:19.149 09:38:29 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.h6d4hRMySv 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:19.149 09:38:29 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.h6d4hRMySv 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.h6d4hRMySv 00:30:19.149 09:38:29 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.h6d4hRMySv 00:30:19.149 09:38:29 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3h2G9HuCiu 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:19.149 09:38:29 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3h2G9HuCiu 00:30:19.149 09:38:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3h2G9HuCiu 00:30:19.149 09:38:29 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.3h2G9HuCiu 00:30:19.149 09:38:29 keyring_file -- keyring/file.sh@30 -- # tgtpid=970134 00:30:19.149 09:38:29 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:19.149 09:38:29 keyring_file -- keyring/file.sh@32 -- # waitforlisten 970134 00:30:19.149 09:38:29 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 970134 ']' 00:30:19.149 09:38:29 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.149 09:38:29 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:19.149 09:38:29 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.149 09:38:29 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:19.149 09:38:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:19.149 [2024-07-15 09:38:30.033662] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
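The prep_key trace above wraps a raw hex secret in the NVMe TLS interchange encoding and locks the file to 0600 before handing it to the keyring. A minimal sketch, assuming test/nvmf/common.sh is sourced for format_interchange_psk (the helper's internal python plumbing is elided here):

    # Values from this run: 16-byte hex secret, digest 0.
    key=00112233445566778899aabbccddeeff
    path=$(mktemp)                              # /tmp/tmp.h6d4hRMySv in this run
    format_interchange_psk "$key" 0 > "$path"   # emits an NVMeTLSkey-1-prefixed key
    chmod 0600 "$path"                          # keyring_file refuses anything looser
    # The file is later registered over the bperf socket:
    #   scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"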
00:30:19.149 [2024-07-15 09:38:30.033755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid970134 ] 00:30:19.149 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.149 [2024-07-15 09:38:30.090257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.149 [2024-07-15 09:38:30.191964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.406 09:38:30 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:19.407 09:38:30 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:19.407 [2024-07-15 09:38:30.413688] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.407 null0 00:30:19.407 [2024-07-15 09:38:30.445740] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:19.407 [2024-07-15 09:38:30.446194] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:19.407 [2024-07-15 09:38:30.453759] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.407 09:38:30 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:19.407 [2024-07-15 09:38:30.461768] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:19.407 request: 00:30:19.407 { 00:30:19.407 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:19.407 "secure_channel": false, 00:30:19.407 "listen_address": { 00:30:19.407 "trtype": "tcp", 00:30:19.407 "traddr": "127.0.0.1", 00:30:19.407 "trsvcid": "4420" 00:30:19.407 }, 00:30:19.407 "method": "nvmf_subsystem_add_listener", 00:30:19.407 "req_id": 1 00:30:19.407 } 00:30:19.407 Got JSON-RPC error response 00:30:19.407 response: 00:30:19.407 { 00:30:19.407 "code": -32602, 00:30:19.407 "message": "Invalid parameters" 00:30:19.407 } 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:19.407 09:38:30 keyring_file -- keyring/file.sh@46 -- # bperfpid=970139 00:30:19.407 09:38:30 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:19.407 09:38:30 keyring_file -- keyring/file.sh@48 -- # waitforlisten 970139 /var/tmp/bperf.sock 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 970139 ']' 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:19.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:19.407 09:38:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:19.407 [2024-07-15 09:38:30.505850] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 00:30:19.407 [2024-07-15 09:38:30.505924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid970139 ] 00:30:19.407 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.407 [2024-07-15 09:38:30.560946] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.665 [2024-07-15 09:38:30.671836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.665 09:38:30 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:19.665 09:38:30 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:19.665 09:38:30 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.h6d4hRMySv 00:30:19.665 09:38:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h6d4hRMySv 00:30:19.923 09:38:31 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3h2G9HuCiu 00:30:19.923 09:38:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3h2G9HuCiu 00:30:20.181 09:38:31 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:20.181 09:38:31 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:20.181 09:38:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:20.181 09:38:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:20.181 09:38:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:20.439 09:38:31 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.h6d4hRMySv == \/\t\m\p\/\t\m\p\.\h\6\d\4\h\R\M\y\S\v ]] 00:30:20.439 09:38:31 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:30:20.439 09:38:31 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:20.439 09:38:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:20.439 09:38:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:20.439 09:38:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:20.697 09:38:31 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.3h2G9HuCiu == \/\t\m\p\/\t\m\p\.\3\h\2\G\9\H\u\C\i\u ]] 00:30:20.697 09:38:31 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:20.697 09:38:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:20.697 09:38:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:20.697 09:38:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:20.697 09:38:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:20.697 09:38:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:20.955 09:38:32 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:20.955 09:38:32 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:20.955 09:38:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:20.955 09:38:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:20.955 09:38:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:20.955 09:38:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:20.955 09:38:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:21.212 09:38:32 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:21.212 09:38:32 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:21.212 09:38:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:21.470 [2024-07-15 09:38:32.485230] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:21.470 nvme0n1 00:30:21.470 09:38:32 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:21.470 09:38:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:21.470 09:38:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:21.470 09:38:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:21.470 09:38:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:21.470 09:38:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:21.727 09:38:32 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:21.727 09:38:32 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:21.727 09:38:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:21.727 09:38:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:21.727 09:38:32 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:30:21.727 09:38:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:21.727 09:38:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:21.984 09:38:33 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:21.984 09:38:33 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:21.984 Running I/O for 1 seconds... 00:30:23.371 00:30:23.371 Latency(us) 00:30:23.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.371 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:23.371 nvme0n1 : 1.01 10282.61 40.17 0.00 0.00 12399.41 6990.51 23787.14 00:30:23.371 =================================================================================================================== 00:30:23.371 Total : 10282.61 40.17 0.00 0.00 12399.41 6990.51 23787.14 00:30:23.371 0 00:30:23.371 09:38:34 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:23.371 09:38:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:23.371 09:38:34 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:23.371 09:38:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:23.371 09:38:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:23.372 09:38:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:23.372 09:38:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:23.372 09:38:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:23.629 09:38:34 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:23.629 09:38:34 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:23.629 09:38:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:23.629 09:38:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:23.629 09:38:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:23.629 09:38:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:23.629 09:38:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:23.887 09:38:34 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:23.887 09:38:34 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:23.887 09:38:34 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:23.887 09:38:34 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:23.887 09:38:34 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:23.887 09:38:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:23.887 09:38:34 keyring_file -- common/autotest_common.sh@640 -- # type 
-t bperf_cmd 00:30:23.887 09:38:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:23.887 09:38:34 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:23.887 09:38:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:24.145 [2024-07-15 09:38:35.192646] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:24.145 [2024-07-15 09:38:35.193461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96a9a0 (107): Transport endpoint is not connected 00:30:24.145 [2024-07-15 09:38:35.194454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96a9a0 (9): Bad file descriptor 00:30:24.145 [2024-07-15 09:38:35.195453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:24.145 [2024-07-15 09:38:35.195473] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:24.145 [2024-07-15 09:38:35.195501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:24.145 request: 00:30:24.145 { 00:30:24.145 "name": "nvme0", 00:30:24.145 "trtype": "tcp", 00:30:24.145 "traddr": "127.0.0.1", 00:30:24.145 "adrfam": "ipv4", 00:30:24.145 "trsvcid": "4420", 00:30:24.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:24.145 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:24.145 "prchk_reftag": false, 00:30:24.145 "prchk_guard": false, 00:30:24.145 "hdgst": false, 00:30:24.145 "ddgst": false, 00:30:24.145 "psk": "key1", 00:30:24.145 "method": "bdev_nvme_attach_controller", 00:30:24.145 "req_id": 1 00:30:24.145 } 00:30:24.145 Got JSON-RPC error response 00:30:24.145 response: 00:30:24.145 { 00:30:24.145 "code": -5, 00:30:24.145 "message": "Input/output error" 00:30:24.145 } 00:30:24.145 09:38:35 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:24.146 09:38:35 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:24.146 09:38:35 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:24.146 09:38:35 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:24.146 09:38:35 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:24.146 09:38:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:24.146 09:38:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:24.146 09:38:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:24.146 09:38:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:24.146 09:38:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:24.403 09:38:35 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:24.403 09:38:35 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:24.403 09:38:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:24.403 09:38:35 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:30:24.403 09:38:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:24.403 09:38:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:24.403 09:38:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:24.661 09:38:35 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:24.661 09:38:35 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:24.661 09:38:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:24.918 09:38:35 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:24.918 09:38:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:25.176 09:38:36 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:25.176 09:38:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:25.176 09:38:36 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:25.434 09:38:36 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:25.434 09:38:36 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.h6d4hRMySv 00:30:25.434 09:38:36 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.h6d4hRMySv 00:30:25.434 09:38:36 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:25.434 09:38:36 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.h6d4hRMySv 00:30:25.434 09:38:36 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:25.434 09:38:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:25.434 09:38:36 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:25.434 09:38:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:25.434 09:38:36 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.h6d4hRMySv 00:30:25.434 09:38:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h6d4hRMySv 00:30:25.692 [2024-07-15 09:38:36.674551] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.h6d4hRMySv': 0100660 00:30:25.692 [2024-07-15 09:38:36.674585] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:25.692 request: 00:30:25.692 { 00:30:25.692 "name": "key0", 00:30:25.692 "path": "/tmp/tmp.h6d4hRMySv", 00:30:25.692 "method": "keyring_file_add_key", 00:30:25.692 "req_id": 1 00:30:25.692 } 00:30:25.692 Got JSON-RPC error response 00:30:25.692 response: 00:30:25.692 { 00:30:25.692 "code": -1, 00:30:25.692 "message": "Operation not permitted" 00:30:25.692 } 00:30:25.692 09:38:36 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:25.692 09:38:36 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:25.692 09:38:36 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:25.692 09:38:36 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
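The failed add above pins down the permission contract: keyring_file rejects a key file at mode 0660 ("Invalid permissions for key file ... 0100660", JSON-RPC code -1) and only accepts it once it is owner-only again, as the chmod 0600 that follows shows. Condensed:

    # Negative/positive permission check as exercised in this run.
    chmod 0660 /tmp/tmp.h6d4hRMySv
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h6d4hRMySv  # fails: Operation not permitted
    chmod 0600 /tmp/tmp.h6d4hRMySv
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h6d4hRMySv  # succeeds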
00:30:25.692 09:38:36 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.h6d4hRMySv 00:30:25.692 09:38:36 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.h6d4hRMySv 00:30:25.692 09:38:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h6d4hRMySv 00:30:25.949 09:38:36 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.h6d4hRMySv 00:30:25.949 09:38:36 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:25.949 09:38:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:25.949 09:38:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:25.949 09:38:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:25.949 09:38:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:25.949 09:38:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:26.207 09:38:37 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:26.207 09:38:37 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:26.207 09:38:37 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:26.207 09:38:37 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:26.207 09:38:37 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:26.207 09:38:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:26.207 09:38:37 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:26.207 09:38:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:26.207 09:38:37 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:26.207 09:38:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:26.464 [2024-07-15 09:38:37.412525] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.h6d4hRMySv': No such file or directory 00:30:26.464 [2024-07-15 09:38:37.412556] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:26.464 [2024-07-15 09:38:37.412597] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:26.464 [2024-07-15 09:38:37.412608] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:26.464 [2024-07-15 09:38:37.412619] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:26.464 request: 00:30:26.464 { 00:30:26.464 "name": "nvme0", 00:30:26.464 "trtype": "tcp", 00:30:26.464 "traddr": "127.0.0.1", 00:30:26.464 "adrfam": "ipv4", 00:30:26.464 "trsvcid": "4420", 00:30:26.464 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:30:26.464 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:26.464 "prchk_reftag": false, 00:30:26.464 "prchk_guard": false, 00:30:26.464 "hdgst": false, 00:30:26.464 "ddgst": false, 00:30:26.464 "psk": "key0", 00:30:26.464 "method": "bdev_nvme_attach_controller", 00:30:26.464 "req_id": 1 00:30:26.464 } 00:30:26.464 Got JSON-RPC error response 00:30:26.464 response: 00:30:26.464 { 00:30:26.464 "code": -19, 00:30:26.464 "message": "No such device" 00:30:26.464 } 00:30:26.464 09:38:37 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:26.464 09:38:37 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:26.464 09:38:37 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:26.464 09:38:37 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:26.464 09:38:37 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:26.464 09:38:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:26.722 09:38:37 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:26.722 09:38:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:26.722 09:38:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:26.722 09:38:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:26.722 09:38:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:26.722 09:38:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:26.722 09:38:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mYGcE8reJQ 00:30:26.722 09:38:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:26.722 09:38:37 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:26.722 09:38:37 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:26.722 09:38:37 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:26.722 09:38:37 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:26.722 09:38:37 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:26.722 09:38:37 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:26.722 09:38:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mYGcE8reJQ 00:30:26.722 09:38:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mYGcE8reJQ 00:30:26.722 09:38:37 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.mYGcE8reJQ 00:30:26.722 09:38:37 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mYGcE8reJQ 00:30:26.722 09:38:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mYGcE8reJQ 00:30:26.980 09:38:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:26.980 09:38:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:27.238 nvme0n1 00:30:27.238 09:38:38 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:30:27.238 09:38:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:27.238 09:38:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:27.238 09:38:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:27.238 09:38:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:27.238 09:38:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:27.496 09:38:38 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:27.496 09:38:38 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:27.496 09:38:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:27.754 09:38:38 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:27.754 09:38:38 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:27.754 09:38:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:27.754 09:38:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:27.754 09:38:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:28.011 09:38:39 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:28.011 09:38:39 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:28.011 09:38:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:28.011 09:38:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:28.011 09:38:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:28.011 09:38:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:28.011 09:38:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:28.269 09:38:39 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:28.269 09:38:39 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:28.269 09:38:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:28.527 09:38:39 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:28.527 09:38:39 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:28.527 09:38:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:28.786 09:38:39 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:28.786 09:38:39 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mYGcE8reJQ 00:30:28.786 09:38:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mYGcE8reJQ 00:30:29.044 09:38:40 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3h2G9HuCiu 00:30:29.044 09:38:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3h2G9HuCiu 00:30:29.302 09:38:40 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:29.302 09:38:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:29.560 nvme0n1 00:30:29.560 09:38:40 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:29.560 09:38:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:29.820 09:38:40 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:29.820 "subsystems": [ 00:30:29.820 { 00:30:29.820 "subsystem": "keyring", 00:30:29.820 "config": [ 00:30:29.820 { 00:30:29.820 "method": "keyring_file_add_key", 00:30:29.820 "params": { 00:30:29.820 "name": "key0", 00:30:29.820 "path": "/tmp/tmp.mYGcE8reJQ" 00:30:29.820 } 00:30:29.820 }, 00:30:29.820 { 00:30:29.820 "method": "keyring_file_add_key", 00:30:29.820 "params": { 00:30:29.820 "name": "key1", 00:30:29.820 "path": "/tmp/tmp.3h2G9HuCiu" 00:30:29.820 } 00:30:29.820 } 00:30:29.820 ] 00:30:29.820 }, 00:30:29.820 { 00:30:29.820 "subsystem": "iobuf", 00:30:29.820 "config": [ 00:30:29.820 { 00:30:29.820 "method": "iobuf_set_options", 00:30:29.820 "params": { 00:30:29.820 "small_pool_count": 8192, 00:30:29.820 "large_pool_count": 1024, 00:30:29.820 "small_bufsize": 8192, 00:30:29.820 "large_bufsize": 135168 00:30:29.820 } 00:30:29.820 } 00:30:29.820 ] 00:30:29.820 }, 00:30:29.820 { 00:30:29.820 "subsystem": "sock", 00:30:29.820 "config": [ 00:30:29.820 { 00:30:29.820 "method": "sock_set_default_impl", 00:30:29.820 "params": { 00:30:29.820 "impl_name": "posix" 00:30:29.820 } 00:30:29.820 }, 00:30:29.820 { 00:30:29.820 "method": "sock_impl_set_options", 00:30:29.821 "params": { 00:30:29.821 "impl_name": "ssl", 00:30:29.821 "recv_buf_size": 4096, 00:30:29.821 "send_buf_size": 4096, 00:30:29.821 "enable_recv_pipe": true, 00:30:29.821 "enable_quickack": false, 00:30:29.821 "enable_placement_id": 0, 00:30:29.821 "enable_zerocopy_send_server": true, 00:30:29.821 "enable_zerocopy_send_client": false, 00:30:29.821 "zerocopy_threshold": 0, 00:30:29.821 "tls_version": 0, 00:30:29.821 "enable_ktls": false 00:30:29.821 } 00:30:29.821 }, 00:30:29.821 { 00:30:29.821 "method": "sock_impl_set_options", 00:30:29.821 "params": { 00:30:29.821 "impl_name": "posix", 00:30:29.821 "recv_buf_size": 2097152, 00:30:29.821 "send_buf_size": 2097152, 00:30:29.821 "enable_recv_pipe": true, 00:30:29.821 "enable_quickack": false, 00:30:29.821 "enable_placement_id": 0, 00:30:29.821 "enable_zerocopy_send_server": true, 00:30:29.821 "enable_zerocopy_send_client": false, 00:30:29.821 "zerocopy_threshold": 0, 00:30:29.821 "tls_version": 0, 00:30:29.821 "enable_ktls": false 00:30:29.821 } 00:30:29.821 } 00:30:29.821 ] 00:30:29.821 }, 00:30:29.821 { 00:30:29.821 "subsystem": "vmd", 00:30:29.821 "config": [] 00:30:29.821 }, 00:30:29.821 { 00:30:29.821 "subsystem": "accel", 00:30:29.821 "config": [ 00:30:29.821 { 00:30:29.821 "method": "accel_set_options", 00:30:29.821 "params": { 00:30:29.821 "small_cache_size": 128, 00:30:29.821 "large_cache_size": 16, 00:30:29.821 "task_count": 2048, 00:30:29.821 "sequence_count": 2048, 00:30:29.821 "buf_count": 2048 00:30:29.821 } 00:30:29.821 } 00:30:29.821 ] 00:30:29.821 }, 00:30:29.821 { 00:30:29.821 
"subsystem": "bdev", 00:30:29.821 "config": [ 00:30:29.821 { 00:30:29.821 "method": "bdev_set_options", 00:30:29.821 "params": { 00:30:29.821 "bdev_io_pool_size": 65535, 00:30:29.821 "bdev_io_cache_size": 256, 00:30:29.821 "bdev_auto_examine": true, 00:30:29.821 "iobuf_small_cache_size": 128, 00:30:29.821 "iobuf_large_cache_size": 16 00:30:29.821 } 00:30:29.821 }, 00:30:29.821 { 00:30:29.821 "method": "bdev_raid_set_options", 00:30:29.821 "params": { 00:30:29.821 "process_window_size_kb": 1024 00:30:29.821 } 00:30:29.821 }, 00:30:29.821 { 00:30:29.821 "method": "bdev_iscsi_set_options", 00:30:29.821 "params": { 00:30:29.821 "timeout_sec": 30 00:30:29.821 } 00:30:29.821 }, 00:30:29.821 { 00:30:29.821 "method": "bdev_nvme_set_options", 00:30:29.821 "params": { 00:30:29.821 "action_on_timeout": "none", 00:30:29.821 "timeout_us": 0, 00:30:29.821 "timeout_admin_us": 0, 00:30:29.821 "keep_alive_timeout_ms": 10000, 00:30:29.821 "arbitration_burst": 0, 00:30:29.821 "low_priority_weight": 0, 00:30:29.821 "medium_priority_weight": 0, 00:30:29.821 "high_priority_weight": 0, 00:30:29.821 "nvme_adminq_poll_period_us": 10000, 00:30:29.821 "nvme_ioq_poll_period_us": 0, 00:30:29.821 "io_queue_requests": 512, 00:30:29.821 "delay_cmd_submit": true, 00:30:29.821 "transport_retry_count": 4, 00:30:29.821 "bdev_retry_count": 3, 00:30:29.821 "transport_ack_timeout": 0, 00:30:29.821 "ctrlr_loss_timeout_sec": 0, 00:30:29.821 "reconnect_delay_sec": 0, 00:30:29.821 "fast_io_fail_timeout_sec": 0, 00:30:29.821 "disable_auto_failback": false, 00:30:29.821 "generate_uuids": false, 00:30:29.821 "transport_tos": 0, 00:30:29.821 "nvme_error_stat": false, 00:30:29.821 "rdma_srq_size": 0, 00:30:29.821 "io_path_stat": false, 00:30:29.821 "allow_accel_sequence": false, 00:30:29.821 "rdma_max_cq_size": 0, 00:30:29.821 "rdma_cm_event_timeout_ms": 0, 00:30:29.821 "dhchap_digests": [ 00:30:29.821 "sha256", 00:30:29.821 "sha384", 00:30:29.821 "sha512" 00:30:29.821 ], 00:30:29.821 "dhchap_dhgroups": [ 00:30:29.821 "null", 00:30:29.821 "ffdhe2048", 00:30:29.821 "ffdhe3072", 00:30:29.821 "ffdhe4096", 00:30:29.821 "ffdhe6144", 00:30:29.821 "ffdhe8192" 00:30:29.821 ] 00:30:29.821 } 00:30:29.821 }, 00:30:29.821 { 00:30:29.821 "method": "bdev_nvme_attach_controller", 00:30:29.821 "params": { 00:30:29.821 "name": "nvme0", 00:30:29.821 "trtype": "TCP", 00:30:29.821 "adrfam": "IPv4", 00:30:29.821 "traddr": "127.0.0.1", 00:30:29.821 "trsvcid": "4420", 00:30:29.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:29.821 "prchk_reftag": false, 00:30:29.821 "prchk_guard": false, 00:30:29.821 "ctrlr_loss_timeout_sec": 0, 00:30:29.821 "reconnect_delay_sec": 0, 00:30:29.821 "fast_io_fail_timeout_sec": 0, 00:30:29.821 "psk": "key0", 00:30:29.821 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:29.821 "hdgst": false, 00:30:29.821 "ddgst": false 00:30:29.821 } 00:30:29.821 }, 00:30:29.821 { 00:30:29.821 "method": "bdev_nvme_set_hotplug", 00:30:29.821 "params": { 00:30:29.821 "period_us": 100000, 00:30:29.821 "enable": false 00:30:29.821 } 00:30:29.821 }, 00:30:29.821 { 00:30:29.821 "method": "bdev_wait_for_examine" 00:30:29.821 } 00:30:29.821 ] 00:30:29.821 }, 00:30:29.821 { 00:30:29.821 "subsystem": "nbd", 00:30:29.821 "config": [] 00:30:29.821 } 00:30:29.821 ] 00:30:29.821 }' 00:30:29.821 09:38:40 keyring_file -- keyring/file.sh@114 -- # killprocess 970139 00:30:29.821 09:38:40 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 970139 ']' 00:30:29.821 09:38:40 keyring_file -- common/autotest_common.sh@952 -- # kill -0 970139 00:30:29.821 09:38:40 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:29.821 09:38:40 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:29.821 09:38:40 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 970139 00:30:29.821 09:38:40 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:29.821 09:38:40 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:29.821 09:38:40 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 970139' 00:30:29.821 killing process with pid 970139 00:30:29.821 09:38:40 keyring_file -- common/autotest_common.sh@967 -- # kill 970139 00:30:29.821 Received shutdown signal, test time was about 1.000000 seconds 00:30:29.821 00:30:29.821 Latency(us) 00:30:29.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.821 =================================================================================================================== 00:30:29.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:29.821 09:38:40 keyring_file -- common/autotest_common.sh@972 -- # wait 970139 00:30:30.081 09:38:41 keyring_file -- keyring/file.sh@117 -- # bperfpid=971597 00:30:30.081 09:38:41 keyring_file -- keyring/file.sh@119 -- # waitforlisten 971597 /var/tmp/bperf.sock 00:30:30.081 09:38:41 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 971597 ']' 00:30:30.081 09:38:41 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:30.081 09:38:41 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:30.081 09:38:41 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:30.081 09:38:41 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:30.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
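[Note] The bperf relaunch above leans on a small bash trick: the configuration captured a moment earlier via save_config (file.sh@112) is echoed into a process substitution, and bdevperf reads it back as -c /dev/fd/63, the read end of the anonymous pipe. A hedged sketch of the same round-trip, with the /var/jenkins/... path prefixes shortened for readability:

  # Capture the live JSON config of the old process, then boot a fresh
  # bdevperf from it; <(...) appears to the child as /dev/fd/63.
  config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)
  build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")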
00:30:30.081 09:38:41 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:30.081 "subsystems": [ 00:30:30.081 { 00:30:30.081 "subsystem": "keyring", 00:30:30.081 "config": [ 00:30:30.081 { 00:30:30.081 "method": "keyring_file_add_key", 00:30:30.081 "params": { 00:30:30.081 "name": "key0", 00:30:30.081 "path": "/tmp/tmp.mYGcE8reJQ" 00:30:30.081 } 00:30:30.081 }, 00:30:30.081 { 00:30:30.081 "method": "keyring_file_add_key", 00:30:30.081 "params": { 00:30:30.081 "name": "key1", 00:30:30.081 "path": "/tmp/tmp.3h2G9HuCiu" 00:30:30.081 } 00:30:30.081 } 00:30:30.081 ] 00:30:30.081 }, 00:30:30.081 { 00:30:30.081 "subsystem": "iobuf", 00:30:30.081 "config": [ 00:30:30.081 { 00:30:30.081 "method": "iobuf_set_options", 00:30:30.081 "params": { 00:30:30.081 "small_pool_count": 8192, 00:30:30.081 "large_pool_count": 1024, 00:30:30.081 "small_bufsize": 8192, 00:30:30.081 "large_bufsize": 135168 00:30:30.081 } 00:30:30.081 } 00:30:30.081 ] 00:30:30.081 }, 00:30:30.081 { 00:30:30.081 "subsystem": "sock", 00:30:30.081 "config": [ 00:30:30.081 { 00:30:30.081 "method": "sock_set_default_impl", 00:30:30.081 "params": { 00:30:30.081 "impl_name": "posix" 00:30:30.081 } 00:30:30.081 }, 00:30:30.081 { 00:30:30.081 "method": "sock_impl_set_options", 00:30:30.081 "params": { 00:30:30.081 "impl_name": "ssl", 00:30:30.081 "recv_buf_size": 4096, 00:30:30.081 "send_buf_size": 4096, 00:30:30.081 "enable_recv_pipe": true, 00:30:30.081 "enable_quickack": false, 00:30:30.081 "enable_placement_id": 0, 00:30:30.081 "enable_zerocopy_send_server": true, 00:30:30.081 "enable_zerocopy_send_client": false, 00:30:30.081 "zerocopy_threshold": 0, 00:30:30.081 "tls_version": 0, 00:30:30.081 "enable_ktls": false 00:30:30.081 } 00:30:30.081 }, 00:30:30.081 { 00:30:30.081 "method": "sock_impl_set_options", 00:30:30.081 "params": { 00:30:30.081 "impl_name": "posix", 00:30:30.081 "recv_buf_size": 2097152, 00:30:30.081 "send_buf_size": 2097152, 00:30:30.081 "enable_recv_pipe": true, 00:30:30.081 "enable_quickack": false, 00:30:30.081 "enable_placement_id": 0, 00:30:30.081 "enable_zerocopy_send_server": true, 00:30:30.081 "enable_zerocopy_send_client": false, 00:30:30.081 "zerocopy_threshold": 0, 00:30:30.081 "tls_version": 0, 00:30:30.081 "enable_ktls": false 00:30:30.081 } 00:30:30.081 } 00:30:30.081 ] 00:30:30.081 }, 00:30:30.081 { 00:30:30.081 "subsystem": "vmd", 00:30:30.081 "config": [] 00:30:30.081 }, 00:30:30.081 { 00:30:30.081 "subsystem": "accel", 00:30:30.081 "config": [ 00:30:30.081 { 00:30:30.081 "method": "accel_set_options", 00:30:30.081 "params": { 00:30:30.081 "small_cache_size": 128, 00:30:30.081 "large_cache_size": 16, 00:30:30.081 "task_count": 2048, 00:30:30.081 "sequence_count": 2048, 00:30:30.081 "buf_count": 2048 00:30:30.081 } 00:30:30.081 } 00:30:30.081 ] 00:30:30.081 }, 00:30:30.081 { 00:30:30.081 "subsystem": "bdev", 00:30:30.081 "config": [ 00:30:30.081 { 00:30:30.081 "method": "bdev_set_options", 00:30:30.081 "params": { 00:30:30.081 "bdev_io_pool_size": 65535, 00:30:30.081 "bdev_io_cache_size": 256, 00:30:30.081 "bdev_auto_examine": true, 00:30:30.081 "iobuf_small_cache_size": 128, 00:30:30.081 "iobuf_large_cache_size": 16 00:30:30.081 } 00:30:30.081 }, 00:30:30.081 { 00:30:30.081 "method": "bdev_raid_set_options", 00:30:30.081 "params": { 00:30:30.081 "process_window_size_kb": 1024 00:30:30.081 } 00:30:30.081 }, 00:30:30.081 { 00:30:30.081 "method": "bdev_iscsi_set_options", 00:30:30.081 "params": { 00:30:30.081 "timeout_sec": 30 00:30:30.081 } 00:30:30.081 }, 00:30:30.081 { 00:30:30.081 "method": 
"bdev_nvme_set_options", 00:30:30.081 "params": { 00:30:30.081 "action_on_timeout": "none", 00:30:30.081 "timeout_us": 0, 00:30:30.081 "timeout_admin_us": 0, 00:30:30.081 "keep_alive_timeout_ms": 10000, 00:30:30.081 "arbitration_burst": 0, 00:30:30.081 "low_priority_weight": 0, 00:30:30.081 "medium_priority_weight": 0, 00:30:30.081 "high_priority_weight": 0, 00:30:30.081 "nvme_adminq_poll_period_us": 10000, 00:30:30.081 "nvme_ioq_poll_period_us": 0, 00:30:30.081 "io_queue_requests": 512, 00:30:30.081 "delay_cmd_submit": true, 00:30:30.081 "transport_retry_count": 4, 00:30:30.081 "bdev_retry_count": 3, 00:30:30.081 "transport_ack_timeout": 0, 00:30:30.081 "ctrlr_loss_timeout_sec": 0, 00:30:30.081 "reconnect_delay_sec": 0, 00:30:30.081 "fast_io_fail_timeout_sec": 0, 00:30:30.081 "disable_auto_failback": false, 00:30:30.081 "generate_uuids": false, 00:30:30.081 "transport_tos": 0, 00:30:30.081 "nvme_error_stat": false, 00:30:30.081 "rdma_srq_size": 0, 00:30:30.081 "io_path_stat": false, 00:30:30.081 "allow_accel_sequence": false, 00:30:30.081 "rdma_max_cq_size": 0, 00:30:30.081 "rdma_cm_event_timeout_ms": 0, 00:30:30.081 "dhchap_digests": [ 00:30:30.081 "sha256", 00:30:30.081 "sha384", 00:30:30.081 "sha512" 00:30:30.081 ], 00:30:30.081 "dhchap_dhgroups": [ 00:30:30.081 "null", 00:30:30.081 "ffdhe2048", 00:30:30.081 "ffdhe3072", 00:30:30.081 "ffdhe4096", 00:30:30.081 "ffdhe6144", 00:30:30.081 "ffdhe8192" 00:30:30.081 ] 00:30:30.081 } 00:30:30.081 }, 00:30:30.081 { 00:30:30.081 "method": "bdev_nvme_attach_controller", 00:30:30.081 "params": { 00:30:30.081 "name": "nvme0", 00:30:30.081 "trtype": "TCP", 00:30:30.081 "adrfam": "IPv4", 00:30:30.081 "traddr": "127.0.0.1", 00:30:30.081 "trsvcid": "4420", 00:30:30.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:30.081 "prchk_reftag": false, 00:30:30.081 "prchk_guard": false, 00:30:30.081 "ctrlr_loss_timeout_sec": 0, 00:30:30.081 "reconnect_delay_sec": 0, 00:30:30.081 "fast_io_fail_timeout_sec": 0, 00:30:30.081 "psk": "key0", 00:30:30.081 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:30.081 "hdgst": false, 00:30:30.081 "ddgst": false 00:30:30.081 } 00:30:30.081 }, 00:30:30.081 { 00:30:30.081 "method": "bdev_nvme_set_hotplug", 00:30:30.081 "params": { 00:30:30.081 "period_us": 100000, 00:30:30.082 "enable": false 00:30:30.082 } 00:30:30.082 }, 00:30:30.082 { 00:30:30.082 "method": "bdev_wait_for_examine" 00:30:30.082 } 00:30:30.082 ] 00:30:30.082 }, 00:30:30.082 { 00:30:30.082 "subsystem": "nbd", 00:30:30.082 "config": [] 00:30:30.082 } 00:30:30.082 ] 00:30:30.082 }' 00:30:30.082 09:38:41 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:30.082 09:38:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:30.082 [2024-07-15 09:38:41.218147] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
00:30:30.082 [2024-07-15 09:38:41.218228] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid971597 ] 00:30:30.082 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.341 [2024-07-15 09:38:41.275532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.341 [2024-07-15 09:38:41.380490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.601 [2024-07-15 09:38:41.565816] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:31.168 09:38:42 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:31.168 09:38:42 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:31.168 09:38:42 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:31.168 09:38:42 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:31.168 09:38:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:31.426 09:38:42 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:31.426 09:38:42 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:31.426 09:38:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:31.426 09:38:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:31.426 09:38:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:31.426 09:38:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:31.426 09:38:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:31.683 09:38:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:31.683 09:38:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:31.683 09:38:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:31.683 09:38:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:31.683 09:38:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:31.683 09:38:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:31.683 09:38:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:31.940 09:38:42 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:31.940 09:38:42 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:31.940 09:38:42 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:31.940 09:38:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:32.198 09:38:43 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:32.198 09:38:43 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:32.198 09:38:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.mYGcE8reJQ /tmp/tmp.3h2G9HuCiu 00:30:32.198 09:38:43 keyring_file -- keyring/file.sh@20 -- # killprocess 971597 00:30:32.198 09:38:43 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 971597 ']' 00:30:32.198 09:38:43 keyring_file -- common/autotest_common.sh@952 -- # kill -0 971597 00:30:32.198 09:38:43 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:30:32.198 09:38:43 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:32.198 09:38:43 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 971597 00:30:32.198 09:38:43 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:32.198 09:38:43 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:32.198 09:38:43 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 971597' 00:30:32.198 killing process with pid 971597 00:30:32.198 09:38:43 keyring_file -- common/autotest_common.sh@967 -- # kill 971597 00:30:32.198 Received shutdown signal, test time was about 1.000000 seconds 00:30:32.198 00:30:32.198 Latency(us) 00:30:32.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.198 =================================================================================================================== 00:30:32.198 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:32.198 09:38:43 keyring_file -- common/autotest_common.sh@972 -- # wait 971597 00:30:32.457 09:38:43 keyring_file -- keyring/file.sh@21 -- # killprocess 970134 00:30:32.457 09:38:43 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 970134 ']' 00:30:32.457 09:38:43 keyring_file -- common/autotest_common.sh@952 -- # kill -0 970134 00:30:32.457 09:38:43 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:32.457 09:38:43 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:32.457 09:38:43 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 970134 00:30:32.457 09:38:43 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:32.457 09:38:43 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:32.457 09:38:43 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 970134' 00:30:32.457 killing process with pid 970134 00:30:32.457 09:38:43 keyring_file -- common/autotest_common.sh@967 -- # kill 970134 00:30:32.457 [2024-07-15 09:38:43.483571] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:32.457 09:38:43 keyring_file -- common/autotest_common.sh@972 -- # wait 970134 00:30:32.759 00:30:32.759 real 0m14.105s 00:30:32.759 user 0m35.378s 00:30:32.759 sys 0m3.223s 00:30:32.759 09:38:43 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:32.759 09:38:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:32.759 ************************************ 00:30:32.759 END TEST keyring_file 00:30:32.759 ************************************ 00:30:32.759 09:38:43 -- common/autotest_common.sh@1142 -- # return 0 00:30:32.759 09:38:43 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:30:32.759 09:38:43 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:32.759 09:38:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:32.759 09:38:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.759 09:38:43 -- common/autotest_common.sh@10 -- # set +x 00:30:33.017 ************************************ 00:30:33.017 START TEST keyring_linux 00:30:33.017 ************************************ 00:30:33.017 09:38:43 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:33.017 * Looking for test storage... 00:30:33.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:33.017 09:38:43 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:33.017 09:38:43 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.017 09:38:44 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:33.017 09:38:44 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.017 09:38:44 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.017 09:38:44 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.017 09:38:44 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.017 09:38:44 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.017 09:38:44 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.017 09:38:44 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.018 09:38:44 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.018 09:38:44 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.018 09:38:44 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.018 09:38:44 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.018 09:38:44 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.018 09:38:44 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.018 09:38:44 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:33.018 09:38:44 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:33.018 09:38:44 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:33.018 09:38:44 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:33.018 09:38:44 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:33.018 09:38:44 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:33.018 09:38:44 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:33.018 09:38:44 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:33.018 09:38:44 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:33.018 /tmp/:spdk-test:key0 00:30:33.018 09:38:44 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:33.018 09:38:44 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:33.018 09:38:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:33.018 /tmp/:spdk-test:key1 00:30:33.018 09:38:44 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=971962 00:30:33.018 09:38:44 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:33.018 09:38:44 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 971962 00:30:33.018 09:38:44 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 971962 ']' 00:30:33.018 09:38:44 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.018 09:38:44 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:33.018 09:38:44 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.018 09:38:44 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:33.018 09:38:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:33.018 [2024-07-15 09:38:44.139113] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
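[Note] prep_key above writes each key file in the NVMe/TCP TLS PSK interchange format, NVMeTLSkey-1:<digest>:<base64>:, where the base64 payload is the configured key bytes with a CRC32 appended. A sketch of what the inline `python -` heredoc (nvmf/common.sh@705) likely computes; the CRC byte order is an assumption here, so treat this as an approximation of the helper, not the helper itself:

python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"             # key string from linux.sh@13, as ASCII bytes
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # assumed little-endian per the interchange format
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF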
00:30:33.018 [2024-07-15 09:38:44.139216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid971962 ] 00:30:33.018 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.018 [2024-07-15 09:38:44.203629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.277 [2024-07-15 09:38:44.311450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.557 09:38:44 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:33.557 09:38:44 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:33.557 09:38:44 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:33.557 09:38:44 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.557 09:38:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:33.557 [2024-07-15 09:38:44.534474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.557 null0 00:30:33.557 [2024-07-15 09:38:44.566521] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:33.557 [2024-07-15 09:38:44.566981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:33.557 09:38:44 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.557 09:38:44 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:33.557 711153578 00:30:33.557 09:38:44 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:33.557 133697651 00:30:33.557 09:38:44 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=972088 00:30:33.557 09:38:44 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:33.557 09:38:44 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 972088 /var/tmp/bperf.sock 00:30:33.557 09:38:44 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 972088 ']' 00:30:33.557 09:38:44 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:33.557 09:38:44 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:33.557 09:38:44 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:33.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:33.558 09:38:44 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:33.558 09:38:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:33.558 [2024-07-15 09:38:44.635339] Starting SPDK v24.09-pre git sha1 b0f01ebc5 / DPDK 24.03.0 initialization... 
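[Note] This leg swaps the file backend for the kernel keyring: the two interchange-format PSKs are loaded into the session keyring (@s) as "user"-type keys named :spdk-test:key0 and :spdk-test:key1, and keyctl hands back the serials (711153578 and 133697651) that the script later uses to verify and unlink them. A sketch of the flow using the names, socket, and key string from this run; since this bdevperf is started with --wait-for-rpc, the keyring option goes in before framework_start_init:

  keyctl add user :spdk-test:key0 \
      "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s   # prints the key serial
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0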
00:30:33.558 [2024-07-15 09:38:44.635421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid972088 ] 00:30:33.558 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.558 [2024-07-15 09:38:44.693583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.834 [2024-07-15 09:38:44.799346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.834 09:38:44 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:33.834 09:38:44 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:33.834 09:38:44 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:33.834 09:38:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:34.092 09:38:45 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:34.092 09:38:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:34.349 09:38:45 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:34.349 09:38:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:34.606 [2024-07-15 09:38:45.629250] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:34.606 nvme0n1 00:30:34.606 09:38:45 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:34.606 09:38:45 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:34.606 09:38:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:34.606 09:38:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:34.606 09:38:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:34.606 09:38:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:34.864 09:38:45 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:34.864 09:38:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:34.864 09:38:45 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:34.864 09:38:45 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:34.864 09:38:45 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:34.864 09:38:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:34.864 09:38:45 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:35.122 09:38:46 keyring_linux -- keyring/linux.sh@25 -- # sn=711153578 00:30:35.122 09:38:46 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:35.122 09:38:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:30:35.122 09:38:46 keyring_linux -- keyring/linux.sh@26 -- # [[ 711153578 == \7\1\1\1\5\3\5\7\8 ]] 00:30:35.122 09:38:46 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 711153578 00:30:35.122 09:38:46 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:35.122 09:38:46 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:35.122 Running I/O for 1 seconds... 00:30:36.497 00:30:36.497 Latency(us) 00:30:36.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.497 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:36.497 nvme0n1 : 1.01 11013.99 43.02 0.00 0.00 11547.98 4029.25 15922.82 00:30:36.497 =================================================================================================================== 00:30:36.497 Total : 11013.99 43.02 0.00 0.00 11547.98 4029.25 15922.82 00:30:36.497 0 00:30:36.497 09:38:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:36.497 09:38:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:36.497 09:38:47 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:36.497 09:38:47 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:36.497 09:38:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:36.497 09:38:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:36.497 09:38:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:36.497 09:38:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:36.755 09:38:47 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:36.755 09:38:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:36.755 09:38:47 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:36.755 09:38:47 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:36.755 09:38:47 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:30:36.755 09:38:47 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:36.755 09:38:47 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:36.755 09:38:47 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:36.755 09:38:47 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:36.755 09:38:47 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:36.755 09:38:47 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:36.755 09:38:47 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:37.013 [2024-07-15 09:38:48.081173] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:37.013 [2024-07-15 09:38:48.081821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24253f0 (107): Transport endpoint is not connected 00:30:37.013 [2024-07-15 09:38:48.082791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24253f0 (9): Bad file descriptor 00:30:37.013 [2024-07-15 09:38:48.083776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:37.013 [2024-07-15 09:38:48.083828] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:37.013 [2024-07-15 09:38:48.083842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:37.013 request: 00:30:37.013 { 00:30:37.013 "name": "nvme0", 00:30:37.013 "trtype": "tcp", 00:30:37.013 "traddr": "127.0.0.1", 00:30:37.013 "adrfam": "ipv4", 00:30:37.013 "trsvcid": "4420", 00:30:37.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:37.013 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:37.013 "prchk_reftag": false, 00:30:37.013 "prchk_guard": false, 00:30:37.013 "hdgst": false, 00:30:37.013 "ddgst": false, 00:30:37.013 "psk": ":spdk-test:key1", 00:30:37.013 "method": "bdev_nvme_attach_controller", 00:30:37.013 "req_id": 1 00:30:37.013 } 00:30:37.013 Got JSON-RPC error response 00:30:37.013 response: 00:30:37.013 { 00:30:37.013 "code": -5, 00:30:37.013 "message": "Input/output error" 00:30:37.013 } 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@33 -- # sn=711153578 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 711153578 00:30:37.013 1 links removed 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@33 -- # sn=133697651 00:30:37.013 
09:38:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 133697651 00:30:37.013 1 links removed 00:30:37.013 09:38:48 keyring_linux -- keyring/linux.sh@41 -- # killprocess 972088 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 972088 ']' 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 972088 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 972088 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 972088' 00:30:37.013 killing process with pid 972088 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@967 -- # kill 972088 00:30:37.013 Received shutdown signal, test time was about 1.000000 seconds 00:30:37.013 00:30:37.013 Latency(us) 00:30:37.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:37.013 =================================================================================================================== 00:30:37.013 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:37.013 09:38:48 keyring_linux -- common/autotest_common.sh@972 -- # wait 972088 00:30:37.272 09:38:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 971962 00:30:37.272 09:38:48 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 971962 ']' 00:30:37.272 09:38:48 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 971962 00:30:37.272 09:38:48 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:37.272 09:38:48 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:37.272 09:38:48 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 971962 00:30:37.272 09:38:48 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:37.272 09:38:48 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:37.272 09:38:48 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 971962' 00:30:37.272 killing process with pid 971962 00:30:37.272 09:38:48 keyring_linux -- common/autotest_common.sh@967 -- # kill 971962 00:30:37.272 09:38:48 keyring_linux -- common/autotest_common.sh@972 -- # wait 971962 00:30:37.835 00:30:37.835 real 0m4.819s 00:30:37.835 user 0m9.492s 00:30:37.835 sys 0m1.502s 00:30:37.835 09:38:48 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:37.835 09:38:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:37.835 ************************************ 00:30:37.835 END TEST keyring_linux 00:30:37.835 ************************************ 00:30:37.835 09:38:48 -- common/autotest_common.sh@1142 -- # return 0 00:30:37.835 09:38:48 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:37.835 09:38:48 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:37.835 09:38:48 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:37.835 09:38:48 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:30:37.835 09:38:48 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:30:37.835 09:38:48 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:37.835 09:38:48 -- spdk/autotest.sh@339 -- # 
00:30:37.835 09:38:48 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:30:37.835 09:38:48 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:30:37.835 09:38:48 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:30:37.835 09:38:48 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:30:37.835 09:38:48 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:30:37.835 09:38:48 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:30:37.835 09:38:48 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:30:37.835 09:38:48 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:30:37.835 09:38:48 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:30:37.835 09:38:48 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:30:37.835 09:38:48 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:30:37.835 09:38:48 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:30:37.835 09:38:48 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:30:37.835 09:38:48 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:30:37.835 09:38:48 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:30:37.835 09:38:48 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:30:37.835 09:38:48 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:30:37.835 09:38:48 -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:37.835 09:38:48 -- common/autotest_common.sh@10 -- # set +x
00:30:37.835 09:38:48 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:30:37.835 09:38:48 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:30:37.835 09:38:48 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:30:37.835 09:38:48 -- common/autotest_common.sh@10 -- # set +x
00:30:39.733 INFO: APP EXITING
00:30:39.733 INFO: killing all VMs
00:30:39.733 INFO: killing vhost app
00:30:39.733 INFO: EXIT DONE
00:30:40.668 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:30:40.668 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:30:40.668 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:30:40.668 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:30:40.668 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:30:40.668 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:30:40.668 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:30:40.668 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:30:40.668 0000:0b:00.0 (8086 0a54): Already using the nvme driver
00:30:40.668 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:30:40.668 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:30:40.668 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:30:40.668 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:30:40.668 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:30:40.668 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:30:40.925 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:30:40.925 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:30:42.298 Cleaning
00:30:42.298 Removing: /var/run/dpdk/spdk0/config
00:30:42.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:30:42.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:30:42.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:30:42.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:30:42.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:30:42.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:30:42.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:30:42.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:30:42.298 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:30:42.298 Removing: /var/run/dpdk/spdk0/hugepage_info
00:30:42.298 Removing: /var/run/dpdk/spdk1/config
00:30:42.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:30:42.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:30:42.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:30:42.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:30:42.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:30:42.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:30:42.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:30:42.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:30:42.298 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:30:42.298 Removing: /var/run/dpdk/spdk1/hugepage_info
00:30:42.298 Removing: /var/run/dpdk/spdk1/mp_socket
00:30:42.298 Removing: /var/run/dpdk/spdk2/config
00:30:42.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:30:42.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:30:42.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:30:42.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:30:42.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:30:42.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:30:42.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:30:42.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:30:42.298 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:30:42.298 Removing: /var/run/dpdk/spdk2/hugepage_info
00:30:42.298 Removing: /var/run/dpdk/spdk3/config
00:30:42.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:30:42.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:30:42.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:30:42.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:30:42.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:30:42.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:30:42.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:30:42.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:30:42.298 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:30:42.298 Removing: /var/run/dpdk/spdk3/hugepage_info
00:30:42.298 Removing: /var/run/dpdk/spdk4/config
00:30:42.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:30:42.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:30:42.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:30:42.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:30:42.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:30:42.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:30:42.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:30:42.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:30:42.298 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:30:42.298 Removing: /var/run/dpdk/spdk4/hugepage_info
00:30:42.298 Removing: /dev/shm/bdev_svc_trace.1
00:30:42.298 Removing: /dev/shm/nvmf_trace.0
00:30:42.298 Removing: /dev/shm/spdk_tgt_trace.pid713882
00:30:42.298 Removing: /var/run/dpdk/spdk0
00:30:42.298 Removing: /var/run/dpdk/spdk1
00:30:42.298 Removing: /var/run/dpdk/spdk2
00:30:42.298 Removing: /var/run/dpdk/spdk3
00:30:42.298 Removing: /var/run/dpdk/spdk4
00:30:42.298 Removing: /var/run/dpdk/spdk_pid712337
00:30:42.298 Removing: /var/run/dpdk/spdk_pid713070
00:30:42.298 Removing: /var/run/dpdk/spdk_pid713882
00:30:42.298 Removing: /var/run/dpdk/spdk_pid714418
00:30:42.298 Removing: /var/run/dpdk/spdk_pid715132
00:30:42.298 Removing: /var/run/dpdk/spdk_pid715619
00:30:42.298 Removing: /var/run/dpdk/spdk_pid716480
00:30:42.298 Removing: /var/run/dpdk/spdk_pid716493
00:30:42.298 Removing: /var/run/dpdk/spdk_pid716735
00:30:42.298 Removing: /var/run/dpdk/spdk_pid718007
00:30:42.298 Removing: /var/run/dpdk/spdk_pid718978
00:30:42.299 Removing: /var/run/dpdk/spdk_pid719191
00:30:42.299 Removing: /var/run/dpdk/spdk_pid719477
00:30:42.299 Removing: /var/run/dpdk/spdk_pid719680
00:30:42.299 Removing: /var/run/dpdk/spdk_pid719868
00:30:42.299 Removing: /var/run/dpdk/spdk_pid720025
00:30:42.299 Removing: /var/run/dpdk/spdk_pid720183
00:30:42.299 Removing: /var/run/dpdk/spdk_pid720368
00:30:42.299 Removing: /var/run/dpdk/spdk_pid720694
00:30:42.299 Removing: /var/run/dpdk/spdk_pid723063
00:30:42.299 Removing: /var/run/dpdk/spdk_pid723313
00:30:42.299 Removing: /var/run/dpdk/spdk_pid723479
00:30:42.299 Removing: /var/run/dpdk/spdk_pid723491
00:30:42.299 Removing: /var/run/dpdk/spdk_pid723872
00:30:42.299 Removing: /var/run/dpdk/spdk_pid723920
00:30:42.299 Removing: /var/run/dpdk/spdk_pid724232
00:30:42.299 Removing: /var/run/dpdk/spdk_pid724350
00:30:42.299 Removing: /var/run/dpdk/spdk_pid724530
00:30:42.299 Removing: /var/run/dpdk/spdk_pid724656
00:30:42.299 Removing: /var/run/dpdk/spdk_pid724820
00:30:42.299 Removing: /var/run/dpdk/spdk_pid724835
00:30:42.299 Removing: /var/run/dpdk/spdk_pid725278
00:30:42.299 Removing: /var/run/dpdk/spdk_pid725471
00:30:42.299 Removing: /var/run/dpdk/spdk_pid725672
00:30:42.299 Removing: /var/run/dpdk/spdk_pid725838
00:30:42.299 Removing: /var/run/dpdk/spdk_pid725869
00:30:42.299 Removing: /var/run/dpdk/spdk_pid726049
00:30:42.299 Removing: /var/run/dpdk/spdk_pid726211
00:30:42.299 Removing: /var/run/dpdk/spdk_pid726375
00:30:42.299 Removing: /var/run/dpdk/spdk_pid726637
00:30:42.299 Removing: /var/run/dpdk/spdk_pid726802
00:30:42.299 Removing: /var/run/dpdk/spdk_pid726957
00:30:42.299 Removing: /var/run/dpdk/spdk_pid727223
00:30:42.299 Removing: /var/run/dpdk/spdk_pid727382
00:30:42.299 Removing: /var/run/dpdk/spdk_pid727543
00:30:42.299 Removing: /var/run/dpdk/spdk_pid727816
00:30:42.299 Removing: /var/run/dpdk/spdk_pid727968
00:30:42.299 Removing: /var/run/dpdk/spdk_pid728135
00:30:42.299 Removing: /var/run/dpdk/spdk_pid728294
00:30:42.299 Removing: /var/run/dpdk/spdk_pid728562
00:30:42.299 Removing: /var/run/dpdk/spdk_pid728715
00:30:42.299 Removing: /var/run/dpdk/spdk_pid728882
00:30:42.299 Removing: /var/run/dpdk/spdk_pid729148
00:30:42.299 Removing: /var/run/dpdk/spdk_pid729310
00:30:42.299 Removing: /var/run/dpdk/spdk_pid729473
00:30:42.299 Removing: /var/run/dpdk/spdk_pid729731
00:30:42.299 Removing: /var/run/dpdk/spdk_pid729900
00:30:42.299 Removing: /var/run/dpdk/spdk_pid729981
00:30:42.299 Removing: /var/run/dpdk/spdk_pid730198
00:30:42.299 Removing: /var/run/dpdk/spdk_pid732382
00:30:42.299 Removing: /var/run/dpdk/spdk_pid758638
00:30:42.299 Removing: /var/run/dpdk/spdk_pid761250
00:30:42.299 Removing: /var/run/dpdk/spdk_pid768097
00:30:42.299 Removing: /var/run/dpdk/spdk_pid771390
00:30:42.299 Removing: /var/run/dpdk/spdk_pid773609
00:30:42.299 Removing: /var/run/dpdk/spdk_pid774033
00:30:42.299 Removing: /var/run/dpdk/spdk_pid777999
00:30:42.299 Removing: /var/run/dpdk/spdk_pid781845
00:30:42.299 Removing: /var/run/dpdk/spdk_pid781855
00:30:42.299 Removing: /var/run/dpdk/spdk_pid782389
00:30:42.299 Removing: /var/run/dpdk/spdk_pid783169
00:30:42.299 Removing: /var/run/dpdk/spdk_pid784179
00:30:42.299 Removing: /var/run/dpdk/spdk_pid784723
00:30:42.299 Removing: /var/run/dpdk/spdk_pid784733
00:30:42.299 Removing: /var/run/dpdk/spdk_pid784876
00:30:42.299 Removing: /var/run/dpdk/spdk_pid785012
00:30:42.299 Removing: /var/run/dpdk/spdk_pid785057
00:30:42.299 Removing: /var/run/dpdk/spdk_pid785672
00:30:42.299 Removing: /var/run/dpdk/spdk_pid786329
00:30:42.299 Removing: /var/run/dpdk/spdk_pid786987
00:30:42.299 Removing: /var/run/dpdk/spdk_pid787393
00:30:42.299 Removing: /var/run/dpdk/spdk_pid787396
00:30:42.299 Removing: /var/run/dpdk/spdk_pid787532
00:30:42.299 Removing: /var/run/dpdk/spdk_pid788475
00:30:42.299 Removing: /var/run/dpdk/spdk_pid789260
00:30:42.299 Removing: /var/run/dpdk/spdk_pid794498
00:30:42.299 Removing: /var/run/dpdk/spdk_pid794777
00:30:42.299 Removing: /var/run/dpdk/spdk_pid797408
00:30:42.299 Removing: /var/run/dpdk/spdk_pid801051
00:30:42.299 Removing: /var/run/dpdk/spdk_pid803157
00:30:42.299 Removing: /var/run/dpdk/spdk_pid809417
00:30:42.299 Removing: /var/run/dpdk/spdk_pid814633
00:30:42.299 Removing: /var/run/dpdk/spdk_pid816555
00:30:42.299 Removing: /var/run/dpdk/spdk_pid817223
00:30:42.299 Removing: /var/run/dpdk/spdk_pid827415
00:30:42.299 Removing: /var/run/dpdk/spdk_pid829504
00:30:42.299 Removing: /var/run/dpdk/spdk_pid854589
00:30:42.299 Removing: /var/run/dpdk/spdk_pid857383
00:30:42.299 Removing: /var/run/dpdk/spdk_pid858565
00:30:42.299 Removing: /var/run/dpdk/spdk_pid859885
00:30:42.299 Removing: /var/run/dpdk/spdk_pid860015
00:30:42.557 Removing: /var/run/dpdk/spdk_pid860054
00:30:42.557 Removing: /var/run/dpdk/spdk_pid860175
00:30:42.557 Removing: /var/run/dpdk/spdk_pid860607
00:30:42.557 Removing: /var/run/dpdk/spdk_pid861926
00:30:42.557 Removing: /var/run/dpdk/spdk_pid862525
00:30:42.557 Removing: /var/run/dpdk/spdk_pid862956
00:30:42.557 Removing: /var/run/dpdk/spdk_pid864573
00:30:42.557 Removing: /var/run/dpdk/spdk_pid864983
00:30:42.557 Removing: /var/run/dpdk/spdk_pid865436
00:30:42.557 Removing: /var/run/dpdk/spdk_pid867957
00:30:42.557 Removing: /var/run/dpdk/spdk_pid874614
00:30:42.557 Removing: /var/run/dpdk/spdk_pid877406
00:30:42.557 Removing: /var/run/dpdk/spdk_pid881045
00:30:42.557 Removing: /var/run/dpdk/spdk_pid881987
00:30:42.557 Removing: /var/run/dpdk/spdk_pid883079
00:30:42.557 Removing: /var/run/dpdk/spdk_pid885620
00:30:42.557 Removing: /var/run/dpdk/spdk_pid887980
00:30:42.557 Removing: /var/run/dpdk/spdk_pid892315
00:30:42.557 Removing: /var/run/dpdk/spdk_pid892317
00:30:42.557 Removing: /var/run/dpdk/spdk_pid895106
00:30:42.557 Removing: /var/run/dpdk/spdk_pid895242
00:30:42.557 Removing: /var/run/dpdk/spdk_pid895376
00:30:42.557 Removing: /var/run/dpdk/spdk_pid895746
00:30:42.557 Removing: /var/run/dpdk/spdk_pid895765
00:30:42.557 Removing: /var/run/dpdk/spdk_pid898440
00:30:42.557 Removing: /var/run/dpdk/spdk_pid898856
00:30:42.557 Removing: /var/run/dpdk/spdk_pid901398
00:30:42.557 Removing: /var/run/dpdk/spdk_pid903374
00:30:42.557 Removing: /var/run/dpdk/spdk_pid906788
00:30:42.557 Removing: /var/run/dpdk/spdk_pid910208
00:30:42.557 Removing: /var/run/dpdk/spdk_pid916949
00:30:42.557 Removing: /var/run/dpdk/spdk_pid921413
00:30:42.557 Removing: /var/run/dpdk/spdk_pid921419
00:30:42.557 Removing: /var/run/dpdk/spdk_pid933630
00:30:42.557 Removing: /var/run/dpdk/spdk_pid934053
00:30:42.557 Removing: /var/run/dpdk/spdk_pid934564
00:30:42.557 Removing: /var/run/dpdk/spdk_pid934976
00:30:42.557 Removing: /var/run/dpdk/spdk_pid935552
00:30:42.557 Removing: /var/run/dpdk/spdk_pid935965
00:30:42.557 Removing: /var/run/dpdk/spdk_pid936421
00:30:42.557 Removing: /var/run/dpdk/spdk_pid936917
00:30:42.557 Removing: /var/run/dpdk/spdk_pid939405
00:30:42.557 Removing: /var/run/dpdk/spdk_pid939553
00:30:42.557 Removing: /var/run/dpdk/spdk_pid943342
00:30:42.557 Removing: /var/run/dpdk/spdk_pid943486
00:30:42.557 Removing: /var/run/dpdk/spdk_pid945745
00:30:42.557 Removing: /var/run/dpdk/spdk_pid950663
00:30:42.557 Removing: /var/run/dpdk/spdk_pid950702
00:30:42.557 Removing: /var/run/dpdk/spdk_pid953559
00:30:42.557 Removing: /var/run/dpdk/spdk_pid954961
00:30:42.557 Removing: /var/run/dpdk/spdk_pid956367
00:30:42.557 Removing: /var/run/dpdk/spdk_pid957222
00:30:42.557 Removing: /var/run/dpdk/spdk_pid958624
00:30:42.557 Removing: /var/run/dpdk/spdk_pid959391
00:30:42.557 Removing: /var/run/dpdk/spdk_pid964758
00:30:42.557 Removing: /var/run/dpdk/spdk_pid965050
00:30:42.557 Removing: /var/run/dpdk/spdk_pid965447
00:30:42.557 Removing: /var/run/dpdk/spdk_pid967003
00:30:42.557 Removing: /var/run/dpdk/spdk_pid967400
00:30:42.557 Removing: /var/run/dpdk/spdk_pid967704
00:30:42.557 Removing: /var/run/dpdk/spdk_pid970134
00:30:42.557 Removing: /var/run/dpdk/spdk_pid970139
00:30:42.557 Removing: /var/run/dpdk/spdk_pid971597
00:30:42.557 Removing: /var/run/dpdk/spdk_pid971962
00:30:42.557 Removing: /var/run/dpdk/spdk_pid972088
00:30:42.557 Clean
09:38:53 -- common/autotest_common.sh@1451 -- # return 0
09:38:53 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
09:38:53 -- common/autotest_common.sh@728 -- # xtrace_disable
09:38:53 -- common/autotest_common.sh@10 -- # set +x
09:38:53 -- spdk/autotest.sh@386 -- # timing_exit autotest
09:38:53 -- common/autotest_common.sh@728 -- # xtrace_disable
09:38:53 -- common/autotest_common.sh@10 -- # set +x
09:38:53 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
09:38:53 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
09:38:53 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
09:38:53 -- spdk/autotest.sh@391 -- # hash lcov
09:38:53 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
09:38:53 -- spdk/autotest.sh@393 -- # hostname
09:38:53 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:30:42.815 geninfo: WARNING: invalid characters removed from testname!
00:31:14.875 09:39:21 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:14.875 09:39:25 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:17.403 09:39:28 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:20.694 09:39:31 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:23.971 09:39:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:26.530 09:39:37 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:29.053 09:39:40 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
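Editor's note: for readers reproducing the coverage stage outside Jenkins, the commands above reduce to a capture, a merge, and a filter pass. A condensed sketch under the assumption that OUT resolves to the job's spdk/../output directory (the real run passes the full --rc flag set on every lcov invocation):

  OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output   # assumed resolution of spdk/../output
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  lcov -q -c -d . -t "$(hostname)" -o "$OUT/cov_test.info"   # capture post-test counters
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"   # merge with the baseline
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"   # strip out-of-tree and helper code
  done
  rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR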
00:31:29.313 09:39:40 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:29.313 09:39:40 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:31:29.313 09:39:40 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:29.313 09:39:40 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:29.313 09:39:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:29.313 09:39:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:29.313 09:39:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:29.313 09:39:40 -- paths/export.sh@5 -- $ export PATH
00:31:29.313 09:39:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:29.313 09:39:40 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:31:29.313 09:39:40 -- common/autobuild_common.sh@444 -- $ date +%s
00:31:29.313 09:39:40 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721029180.XXXXXX
00:31:29.313 09:39:40 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721029180.GuClVY
00:31:29.313 09:39:40 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:31:29.313 09:39:40 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:31:29.313 09:39:40 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:31:29.313 09:39:40 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:31:29.313 09:39:40 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:31:29.313 09:39:40 -- common/autobuild_common.sh@460 -- $ get_config_params
00:31:29.313 09:39:40 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:31:29.313 09:39:40 -- common/autotest_common.sh@10 -- $ set +x
00:31:29.313 09:39:40 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
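Editor's note: the autopackage prologue above derives a unique scratch directory from the current epoch. The same step isolated as a sketch (the directory name is whatever mktemp returns; /tmp/spdk_1721029180.GuClVY in this run):

  ts=$(date +%s)                                    # 1721029180 in this run
  SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")  # mktemp fills in the XXXXXX suffix
  echo "$SPDK_WORKSPACE"                            # e.g. /tmp/spdk_1721029180.GuClVY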
00:31:29.313 09:39:40 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:31:29.313 09:39:40 -- pm/common@17 -- $ local monitor
00:31:29.313 09:39:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:29.313 09:39:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:29.313 09:39:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:29.313 09:39:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:29.313 09:39:40 -- pm/common@21 -- $ date +%s
00:31:29.313 09:39:40 -- pm/common@21 -- $ date +%s
00:31:29.313 09:39:40 -- pm/common@25 -- $ sleep 1
00:31:29.313 09:39:40 -- pm/common@21 -- $ date +%s
00:31:29.313 09:39:40 -- pm/common@21 -- $ date +%s
00:31:29.313 09:39:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721029180
00:31:29.313 09:39:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721029180
00:31:29.313 09:39:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721029180
00:31:29.313 09:39:40 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721029180
00:31:29.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721029180_collect-vmstat.pm.log
00:31:29.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721029180_collect-cpu-load.pm.log
00:31:29.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721029180_collect-cpu-temp.pm.log
00:31:29.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721029180_collect-bmc-pm.bmc.pm.log
00:31:30.255 09:39:41 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:31:30.255 09:39:41 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:31:30.255 09:39:41 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:30.255 09:39:41 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:30.255 09:39:41 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:30.255 09:39:41 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:30.255 09:39:41 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:30.255 09:39:41 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:30.255 09:39:41 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:30.255 09:39:41 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:30.255 09:39:41 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:30.255 09:39:41 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:30.255 09:39:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:30.255 09:39:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:30.255 09:39:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:31:30.255 09:39:41 -- pm/common@44 -- $ pid=982309
00:31:30.255 09:39:41 -- pm/common@50 -- $ kill -TERM 982309
00:31:30.255 09:39:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:30.255 09:39:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:31:30.255 09:39:41 -- pm/common@44 -- $ pid=982311
00:31:30.255 09:39:41 -- pm/common@50 -- $ kill -TERM 982311
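Editor's note: stop_monitor_resources, traced above, is the mirror image of the monitor launch: each collector recorded its PID under ../output/power, and teardown TERMs whatever those files hold. A sketch of the loop under that assumption (the bmc-pm collector runs as root, hence the sudo variant seen in the trace):

  POWER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/power   # assumed resolution of spdk/../output/power
  for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
      pidfile=$POWER/$monitor.pid
      [[ -e $pidfile ]] || continue
      pid=$(<"$pidfile")
      if [[ $monitor == collect-bmc-pm ]]; then
          sudo -E kill -TERM "$pid"   # matches "sudo -E kill -TERM 982341" below
      else
          kill -TERM "$pid"
      fi
  done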
00:31:30.255 09:39:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:30.255 09:39:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:31:30.255 09:39:41 -- pm/common@44 -- $ pid=982313
00:31:30.255 09:39:41 -- pm/common@50 -- $ kill -TERM 982313
00:31:30.255 09:39:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:30.255 09:39:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:31:30.255 09:39:41 -- pm/common@44 -- $ pid=982341
00:31:30.255 09:39:41 -- pm/common@50 -- $ sudo -E kill -TERM 982341
00:31:30.255 + [[ -n 628530 ]]
00:31:30.255 + sudo kill 628530
00:31:30.266 [Pipeline] }
00:31:30.286 [Pipeline] // stage
00:31:30.292 [Pipeline] }
00:31:30.309 [Pipeline] // timeout
00:31:30.315 [Pipeline] }
00:31:30.332 [Pipeline] // catchError
00:31:30.337 [Pipeline] }
00:31:30.356 [Pipeline] // wrap
00:31:30.362 [Pipeline] }
00:31:30.378 [Pipeline] // catchError
00:31:30.388 [Pipeline] stage
00:31:30.390 [Pipeline] { (Epilogue)
00:31:30.405 [Pipeline] catchError
00:31:30.407 [Pipeline] {
00:31:30.447 [Pipeline] echo
00:31:30.450 Cleanup processes
00:31:30.456 [Pipeline] sh
00:31:30.745 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:30.745 982453 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:31:30.745 982573 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:30.765 [Pipeline] sh
00:31:31.112 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:31.112 ++ awk '{print $1}'
00:31:31.112 ++ grep -v 'sudo pgrep'
00:31:31.112 + sudo kill -9 982453
00:31:31.126 [Pipeline] sh
00:31:31.413 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:39.535 [Pipeline] sh
00:31:39.823 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:39.823 Artifacts sizes are good
00:31:39.837 [Pipeline] archiveArtifacts
00:31:39.844 Archiving artifacts
00:31:40.098 [Pipeline] sh
00:31:40.382 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:31:40.397 [Pipeline] cleanWs
00:31:40.408 [WS-CLEANUP] Deleting project workspace...
00:31:40.408 [WS-CLEANUP] Deferred wipeout is used...
00:31:40.416 [WS-CLEANUP] done
00:31:40.418 [Pipeline] }
00:31:40.438 [Pipeline] // catchError
00:31:40.450 [Pipeline] sh
00:31:40.743 + logger -p user.info -t JENKINS-CI
00:31:40.753 [Pipeline] }
00:31:40.771 [Pipeline] // stage
00:31:40.778 [Pipeline] }
00:31:40.797 [Pipeline] // node
00:31:40.805 [Pipeline] End of Pipeline
00:31:40.841 Finished: SUCCESS
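Editor's note: the epilogue's process sweep is a reusable idiom: list everything still referencing the workspace, filter the pgrep pipeline itself out of the result, and force-kill the remainder (here, the leftover ipmitool sdr dump, pid 982453). A sketch of that pattern as a standalone snippet (the 2>/dev/null and || true guards are additions so a run with nothing left to kill still exits cleanly):

  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  pids=$(sudo pgrep -af "$WS" | grep -v 'sudo pgrep' | awk '{print $1}')
  # $pids is intentionally unquoted so multiple PIDs expand to separate arguments.
  sudo kill -9 $pids 2>/dev/null || true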